
Proceedings of the 2013 International Conference on Intelligent User Interfaces

Fullname: Proceedings of the 2013 International Conference on Intelligent User Interfaces
Editors: Jihie Kim; Jeffrey Nichols; Pedro Szekely
Location: Santa Monica, California
Dates: 2013-Mar-19 to 2013-Mar-22
Publisher: ACM
Standard No: ISBN 978-1-4503-1965-2; ACM DL: Table of Contents; hcibib: IUI13-1
Papers: 45
Pages: 452
Links: Conference Website
  1. IUI 2013-03-19 Volume 1
    1. Keynote address
    2. Crowdsourcing and social media
    3. Agents and personalization
    4. Recommendation
    5. Novel input
    6. Common-sense and agents
    7. Emotion and user modeling
    8. Mobile applications
    9. Keynote address
    10. Visualization
    11. User studies
    12. Tactile and touch
  2. IUI 2013-03-19 Volume 2
    1. Student consortium
    2. Demonstrations
    3. Posters
    4. Workshops

IUI 2013-03-19 Volume 1

Keynote address

Duolingo: learn a language for free while helping to translate the web (pp. 1-2)
  Luis von Ahn
I want to translate the Web into every major language: every webpage, every video, and, yes, even Justin Bieber's tweets. With its content split up into hundreds of languages -- and with over 50% of it in English -- most of the Web is inaccessible to most people in the world. This problem is pressing, now more than ever, with millions of people from China, Russia, Latin America and other quickly developing regions entering the Web. In this talk, I introduce my new project, called Duolingo, which aims at breaking this language barrier, and thus making the Web truly "world wide."
   We have all seen how systems such as Google Translate are improving every day at translating the gist of things written in other languages. Unfortunately, they are not yet accurate enough for my purpose: Even when what they spit out is intelligible, it's so badly written that I can't read more than a few lines before getting a headache.
   With Duolingo, our goal is to encourage people, like you and me, to translate the Web into their native languages.

Crowdsourcing and social media

Leveraging the crowd to improve feature-sentiment analysis of user reviews (pp. 3-14)
  Shih-Wen Huang; Pei-Fen Tu; Wai-Tat Fu; Mohammad Amamzadeh
Crowdsourcing and machine learning are both useful techniques for solving difficult problems (e.g., computer vision and natural language processing). In this paper, we propose a novel method that harnesses and combines the strengths of these two techniques to better analyze the features, and the sentiments toward them, in user reviews. To strike a good balance between reducing information overload and providing the original context expressed by review writers, the proposed system (1) allows users to interactively rank the entities based on feature-rating, (2) automatically highlights sentences that are related to relevant features, and (3) utilizes implicit crowdsourcing by encouraging users to provide correct labels for their own reviews to improve the feature-sentiment classifier. The proposed system not only helps users save time and effort in digesting the often massive amount of user reviews, but also provides real-time suggestions on relevant features and ratings as users write their own reviews. Results from a simulation experiment show that leveraging the crowd can significantly improve the feature-sentiment analysis of user reviews. Furthermore, results from a user study show that the proposed interface was preferred by more participants than interfaces that use traditional noun-adjective pair summarization, as it allows users to view feature-related information in the original context.
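   The implicit-crowdsourcing loop (seed a sentiment classifier, then update it online as review writers confirm labels for their own sentences) can be sketched roughly as follows. This is a toy illustration only; the sentences, labels and model choice are hypothetical, not the authors' implementation:

     from sklearn.feature_extraction.text import HashingVectorizer
     from sklearn.linear_model import SGDClassifier

     vec = HashingVectorizer(n_features=2**12)
     clf = SGDClassifier(random_state=0)

     # Seed the feature-sentiment classifier with a few labelled sentences.
     seed = ["battery lasts long", "screen cracked fast"]
     clf.partial_fit(vec.transform(seed), [1, 0], classes=[0, 1])

     def crowd_label(sentence, sentiment):
         # Implicit crowdsourcing: a review writer confirms the sentiment
         # of a sentence in their own review; the model updates incrementally.
         clf.partial_fit(vec.transform([sentence]), [sentiment])

     crowd_label("camera quality is superb", 1)
     crowd_label("speaker sounds terrible", 0)
     print(clf.predict(vec.transform(["terrible battery life"])))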
Tailoring recommendations to groups of users: a graph walk-based approach (pp. 15-24)
  Heung-Nam Kim; Majdi Rawashdeh; Abdulmotaleb El Saddik
With the rapidly growing popularity of smart devices, users are easily and conveniently accessing rich multimedia content. Consequently, an increasing need for recommender services, from both individual users and groups of users, has arisen. In this paper, we present a graph-based approach to a recommender system that can make recommendations most notably to groups of users. From rating information, we first model a signed graph that contains both positive and negative links between users and items. On this graph we examine two distinct random walks to separately quantify the degree to which a group of users would like or dislike items. We then employ a differential ranking approach for tailoring recommendations to the group. Our empirical evaluations on the MovieLens dataset demonstrate that the proposed group recommendation method performs better than existing alternatives. We also demonstrate the feasibility of Folkommender for smartphones.
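   The two-walk idea can be illustrated with a small sketch: run a random walk with restart from the group members over the positive links, another over the negative links, and rank items by the difference. Everything below (the walk definition, parameters and data) is a simplified stand-in for the paper's actual formulation:

     import numpy as np

     def group_scores(R, group, alpha=0.85, iters=50):
         # R: (users x items) signed ratings, +1 liked / -1 disliked / 0 unrated.
         n_users, n_items = R.shape

         def walk(A):
             # Bipartite user-item transition matrix, row-normalised.
             T = np.zeros((n_users + n_items, n_users + n_items))
             T[:n_users, n_users:] = A
             T[n_users:, :n_users] = A.T
             row = T.sum(axis=1, keepdims=True)
             T = np.divide(T, row, out=np.zeros_like(T), where=row > 0)
             r = np.zeros(n_users + n_items)
             r[group] = 1.0 / len(group)       # restart on the group members
             p = r.copy()
             for _ in range(iters):
                 p = alpha * (T.T @ p) + (1 - alpha) * r
             return p[n_users:]                # stationary mass on the items

         # Differential ranking: "like" walk minus "dislike" walk.
         return walk((R > 0).astype(float)) - walk((R < 0).astype(float))

     R = np.array([[1, -1, 0, 1],
                   [1,  0, 1, 0],
                   [0, -1, 1, 1]], dtype=float)
     print(group_scores(R, group=[0, 2]).round(3))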
CatStream: categorising tweets for user profiling and stream filtering (pp. 25-36)
  Sandra Garcia Esparza; Michael P. O'Mahony; Barry Smyth
Real-time information streams such as Twitter have become a common way for users to discover new information. For most users this means curating a set of other users to follow. However, at the moment the following granularity of Twitter is restricted to the level of individual users. Our research has highlighted that many following relationships are motivated by a subset of interests that are shared by the users in question. For example, user A might follow user B because of their technology-related tweets, but share little or no interest in their other tweets. As a result, this all-or-nothing following relationship can quickly overwhelm users' timelines with extraneous information. To improve this situation we propose a user profiling approach based on the topical categorisation of users' posted URLs. These topics can then be used to filter information streams so that users can focus on the more relevant information from the people they follow, based on their core interests. In particular, we present a system called CatStream that provides a more fine-grained way to follow users on specific topics and filter timelines accordingly. We present the results of a live-user study showing that filtered timelines offer users a better way to organise and filter their information streams. Most importantly, users are generally satisfied with the categories predicted for their profiles and tweets.
Recommending targeted strangers from whom to solicit information on social media (pp. 37-48)
  Jalal Mahmud; Michelle X. Zhou; Nimrod Megiddo; Jeffrey Nichols; Clemens Drews
We present an intelligent, crowd-powered information collection system that automatically identifies and asks targeted strangers on Twitter for desired information (e.g., current wait time at a nightclub). Our work includes three parts. First, we identify a set of features that characterize one's willingness and readiness to respond based on their exhibited social behavior, including the content of their tweets and social interaction patterns. Second, we use the identified features to build a statistical model that predicts one's likelihood to respond to information solicitations. Third, we develop a recommendation algorithm that selects a set of targeted strangers using the probabilities computed by our statistical model, with the goal of maximizing the overall response rate. Our experiments, including several in the real world, demonstrate the effectiveness of our work.
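   A bare-bones version of steps two and three might look as follows; the behavioural features, training data and model below are invented for illustration. With independent responders, taking the top-k predicted probabilities maximizes the expected number of replies:

     import numpy as np
     from sklearn.linear_model import LogisticRegression

     rng = np.random.default_rng(0)
     # Hypothetical per-user features: [tweets_per_day, reply_fraction,
     # log_followers, recent_activity], plus whether the user responded.
     X_train = rng.random((200, 4))
     y_train = (X_train[:, 1] + 0.3 * rng.random(200) > 0.7).astype(int)

     model = LogisticRegression().fit(X_train, y_train)

     def pick_targets(X_candidates, k=5):
         # Rank candidate strangers by predicted response probability.
         p = model.predict_proba(X_candidates)[:, 1]
         order = np.argsort(p)[::-1][:k]
         return order, p[order]

     idx, probs = pick_targets(rng.random((50, 4)))
     print(idx, probs.round(2))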

Agents and personalization

An approach to controlling user models and personalization effects in recommender systems (pp. 49-56)
  Fedor Bakalov; Marie-Jean Meurs; Birgitta König-Ries; Bahar Sateli; René Witte; Greg Butler; Adrian Tsang
Personalization nowadays is a commodity in a broad spectrum of computer systems. Examples range from online shops recommending products identified based on the user's previous purchases to web search engines sorting search hits based on the user's browsing history. The aim of such adaptive behavior is to help users find relevant content more easily and quickly. However, there are a number of negative aspects of this behavior. Adaptive systems have been criticized for violating the usability principles of direct manipulation systems, namely controllability, predictability, transparency, and unobtrusiveness. In this paper, we propose an approach to controlling adaptive behavior in recommender systems. It allows users to get an overview of personalization effects, view the user profile that is used for personalization, and adjust the profile and personalization effects to their needs and preferences. We present this approach using the example of a personalized portal for biochemical literature, whose users are biochemists, biologists and genomicists. We also report on a user study evaluating the impacts of controllable personalization on the usefulness, usability, user satisfaction, transparency, and trustworthiness of personalized systems.
Automatic and continuous user task analysis via eye activity (pp. 57-66)
  Siyuan Chen; Julien Epps; Fang Chen
A day in the life of a user can be segmented into a series of tasks: a user begins a task, becomes loaded perceptually and cognitively to some extent by the objects and mental challenge that comprise that task, then at some point switches or is distracted to a new task, and so on. Understanding the contextual task characteristics and user behavior in interaction can benefit the development of intelligent systems to aid user task management. Applications that aid the user in one way or another have proliferated as computing devices become more and more of a constant companion. However, direct and continuous observation of individual tasks in a naturalistic context and subsequent task analysis, for example via the diary method, have traditionally been manual processes. We propose an automatic task analysis system, which monitors the user's current task and analyzes it in terms of task transitions and the perceptual and cognitive load imposed by the task. An experiment was conducted in which participants were required to work continuously on groups of three sequential tasks of different types. Three classes of eye activity, namely pupillary response, blink and eye movement, were analyzed to detect task transition and non-transition states, and to estimate three levels of perceptual load and three levels of cognitive load every second to infer task characteristics. This paper reports statistically significant classification accuracies in all cases and demonstrates the feasibility of this approach for task monitoring and analysis.
Agent metaphor for machine translation mediated communication (pp. 67-74)
  Chunqi Shi; Donghui Lin; Toru Ishida
Machine translation is increasingly used to support multilingual communication. Because of unavoidable translation errors, multilingual communication cannot accurately transfer information. We propose to shift from the transparent channel metaphor to the human-interpreter (agent) metaphor. Instead of viewing machine translation mediated communication as a transparent channel, the interpreter (agent) encourages the dialog participants to collaborate, as their interactivity will be helpful in reducing the number of translation errors, the noise of the channel. We examine the translation issues raised by multilingual communication, and analyze the impact of interactivity on the elimination of translation errors. We propose an implementation of the agent metaphor, which promotes interactivity between dialog participants and the machine translator. We design the architecture of our agent, analyze the interaction process, describe decision support and autonomous behavior, and provide an example of repair strategy preparation. We conduct an English-Chinese communication task experiment on tangram arrangement. The experiment shows that, compared to the transparent-channel metaphor, our agent metaphor reduced human communication effort by 21.6%.

Recommendation

Modeling discussion topics in interactions with a tablet reading primer (pp. 75-84)
  Adrian Boteanu; Sonia Chernova
CloudPrimer is a tablet-based interactive reading primer that aims to foster early literacy skills and shared parent-child reading through user-targeted discussion topic suggestions. The tablet application records discussions between parents and children as they read a story and leverages this information, in combination with a common sense knowledge base, to develop discussion topic models. The long-term goal of the project is to use such models to provide context-sensitive discussion topic suggestions to parents during the shared reading activity in order to enhance the interactive experience and foster parental engagement in literacy education. In this paper, we present a novel approach for using commonsense reasoning to effectively model topics of discussion in unstructured dialog. We introduce a metric for localizing concepts that the users are interested in at a given moment in the dialog and extract a time sequence of words of interest. We then present algorithms for topic modeling and refinement that leverage semantic knowledge acquired from ConceptNet, a commonsense knowledge base. We evaluate the performance of our algorithms using transcriptions of audio recordings of parent-child pairs interacting with a tablet application, and compare the output of our algorithms to human-generated topics. Our results show that words of interest and discussion topics selected by our algorithm closely match those identified by human readers.
Semi-automatic generation of recommendation processes and their GUIs (pp. 85-94)
  Hermann Kaindl; Elmar P. Wach; Ada Okoli; Roman Popp; Ralph Hoch; Werner Gaulke; Tim Hussein
Creating and optimizing content- and dialogue-based recommendation processes and their GUIs (graphical user interfaces) manually is expensive and slow. Changes in the environment may also be found too late or even be overlooked by humans. We show how to generate such processes and their GUIs semi-automatically by using knowledge derived from unstructured data such as customer feedback on products on the Web. Our approach covers the whole lifecycle from knowledge discovery through text mining techniques to the use of this knowledge for semi-automatic generation of recommendation processes and their user interfaces as well as their comparison in real-world use within the e-commerce domain through A/B-variant tests. These tests indicate that our approach can lead to better results as well as less manual effort.
Recommendation system for automatic design of magazine covers (pp. 95-106)
  Ali Jahanian; Jerry Liu; Qian Lin; Daniel Tretter; Eamonn O'Brien-Strain; Seungyon Claire Lee; Nic Lyons; Jan Allebach
In this paper, we present a recommendation system for the automatic design of magazine covers. Our users are non-designer designers: individuals or small and medium businesses who want to design without hiring a professional designer while still wanting to create aesthetically compelling designs. Because a design should have a purpose, we suggest a number of semantic features to the user, e.g., "clean and clear," "dynamic and active," or "formal," to describe the color mood for the purpose of his/her design. Based on these high level features and a number of low level features, such as the complexity of the visual balance in a photo, our system selects the best photos from the user's album for his/her design. Our system then generates several alternative designs that can be rated by the user. Consequently, our system generates future designs based on the user's style. In this fashion, our system personalizes the designs of a user based on his/her preferences.
LinkedVis: exploring social and semantic career recommendations (pp. 107-116)
  Svetlin Bostandjiev; John O'Donovan; Tobias Höllerer
This paper presents LinkedVis, an interactive visual recommender system that combines social and semantic knowledge to produce career recommendations based on the LinkedIn API. A collaborative (social) approach is employed to identify professionals with similar career paths and produce personalized recommendations of both companies and roles. To unify semantically identical but lexically distinct entities and arrive at better user models, we employ lightweight natural language processing and entity resolution using semantic information from a variety of end-points on the web. Elements from the underlying recommendation algorithm are exposed through an interactive interface that allows users to manipulate different aspects of the algorithm and the data it operates on, allowing users to explore a variety of "what-if" scenarios around their current profile. We evaluate LinkedVis through leave-one-out accuracy and diversity experiments on a data corpus collected from 47 users and their LinkedIn connections, as well as through a supervised study of 27 users exploring their own profile and recommendations interactively. Results show that our approach outperforms a benchmark recommendation algorithm without semantic resolution in terms of accuracy and diversity, and that the ability to tweak recommendations interactively by adjusting profile item and social connection weights further improves predictive accuracy. Questionnaires on the user experience with the explanatory and interactive aspects of the application reveal very high user acceptance and satisfaction.
Directing exploratory search: reinforcement learning from user interactions with keywords (pp. 117-128)
  Dorota Glowacka; Tuukka Ruotsalo; Ksenia Konuyshkova; Kumaripaba Athukorala; Samuel Kaski; Giulio Jacucci
Techniques for both exploratory and known-item search tend to direct users only to more specific subtopics or individual documents, rather than letting them direct the exploration of the information space. We present an interactive information retrieval system that combines Reinforcement Learning techniques with a novel user interface design to allow active engagement of users in directing the search. Users can directly manipulate document features (keywords) to indicate their interests, and Reinforcement Learning is used to model the user by allowing the system to trade off between exploration and exploitation. This gives users the opportunity to direct their search more effectively: nearer, further, or along a chosen direction. A task-based user study conducted with 20 participants comparing our system to a traditional query-based baseline indicates that our system significantly improves the effectiveness of information retrieval by providing access to more relevant and novel information without having to spend more time acquiring the information.
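   The exploration/exploitation trade-off over keywords can be pictured with a simple bandit-style sketch; a UCB rule stands in here for the paper's reinforcement-learning model, and the keyword names and feedback signal are hypothetical:

     import math, random

     class KeywordExplorer:
         def __init__(self, keywords):
             self.n = {k: 0 for k in keywords}       # times a keyword was shown
             self.mean = {k: 0.0 for k in keywords}  # mean user feedback
             self.t = 0

         def select(self, m=3, c=1.0):
             # Exploit well-rated keywords, but keep exploring rarely shown ones.
             self.t += 1
             def ucb(k):
                 if self.n[k] == 0:
                     return float("inf")             # force initial exploration
                 return self.mean[k] + c * math.sqrt(math.log(self.t) / self.n[k])
             return sorted(self.n, key=ucb, reverse=True)[:m]

         def feedback(self, keyword, reward):
             # reward in [0, 1], e.g. how strongly the user weighted the keyword.
             self.n[keyword] += 1
             self.mean[keyword] += (reward - self.mean[keyword]) / self.n[keyword]

     agent = KeywordExplorer(["neural", "bayesian", "kernel", "sensor", "speech"])
     for _ in range(20):
         agent.feedback(random.choice(agent.select()), random.random())
     print(agent.select())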

Novel input

Locating user attention using eye tracking and EEG for spatio-temporal event selection (pp. 129-136)
  Felix Putze; Jutta Hild; Rainer Kärgel; Christian Herff; Alexander Redmann; Jürgen Beyerer; Tanja Schultz
In expert video analysis, the selection of certain events in a continuous video stream is a frequently occurring operation, e.g., in surveillance applications. Due to the dynamic and rich visual input, the constantly high attention and the required hand-eye coordination for mouse interaction, this is a very demanding and exhausting task. Hence, relevant events might be missed. We propose to use eye tracking and electroencephalography (EEG) as additional input modalities for event selection. From eye tracking, we derive the spatial location of a perceived event and from patterns in the EEG signal we derive its temporal location within the video stream. This reduces the amount of the required active user input in the selection process, and thus has the potential to reduce the user's workload. In this paper, we describe the employed methods for the localization processes and introduce the developed scenario in which we investigate the feasibility of this approach. Finally, we present and discuss results on the accuracy and the speed of the method and investigate how the modalities interact.
Subtle gaze-dependent techniques for visualising display changes in multi-display environments (pp. 137-148)
  Jakub Dostal; Per Ola Kristensson; Aaron Quigley
This paper explores techniques for visualising display changes in multi-display environments. We present four subtle gaze-dependent techniques for visualising change on unattended displays called FreezeFrame, PixMap, WindowMap and Aura. To enable the techniques to be directly deployed to workstations, we also present a system that automatically identifies the user's eyes using computer vision and a set of web cameras mounted on the displays. An evaluation confirms this system can detect which display the user is attending to with high accuracy. We studied the efficacy of the visualisation techniques in a five-day case study with a working professional. This individual used our system eight hours per day for five consecutive days. The results of the study show that the participant found the system and the techniques useful, subtle, calm and non-intrusive. We conclude by discussing the challenges in evaluating intelligent subtle interaction techniques using traditional experimental paradigms.
Towards cooperative brain-computer interfaces for space navigation (pp. 149-160)
  Riccardo Poli; Caterina Cinel; Ana Matran-Fernandez; Francisco Sepulveda; Adrian Stoica
We explored the possibility of controlling a spacecraft simulator using an analogue Brain-Computer Interface (BCI) for 2-D pointer control. This is a difficult task, for which no previous attempt has been reported in the literature. Our system relies on an active display which produces event-related potentials (ERPs) in the user's brain. These are analysed in real-time to produce control vectors for the user interface. In tests, users of the simulator were told to pass as close as possible to the Sun. Performance was very promising: on average, users managed to satisfy the simulation success criterion in 67.5% of the runs. Furthermore, to study the potential of a collaborative approach to spacecraft navigation, we developed BCIs where the system is controlled via the integration of the ERPs of two users. Performance analysis indicates that collaborative BCIs produce trajectories that are statistically significantly superior to those obtained by single users.
Real-time gait classification for persuasive smartphone apps: structuring the literature and pushing the limits (pp. 161-172)
  Oliver S. Schneider; Karon E. MacLean; Kerem Altun; Idin Karuei; Michael M. A. Wu
Persuasive technology is now mobile and context-aware. Intelligent analysis of accelerometer signals in smartphones and other specialized devices has recently been used to classify activity (e.g., distinguishing walking from cycling) to encourage physical activity, sustainable transport, and other social goals. Unfortunately, results vary drastically due to differences in methodology and problem domain. The present report begins by structuring a survey of current work within a new framework, which highlights comparable characteristics between studies; this provides a tool by which we and others can understand the current state of the art and guide research towards existing gaps. We then present a new user study, positioned in an identified gap, that pushes the limits of current success with a challenging problem: the real-time classification of 15 similar and novel gaits suitable for several persuasive application areas, focused on the growing phenomenon of exercise games. We achieve a mean correct classification rate of 78.1% across all 15 gaits, with a minimal amount of personalized classifier training for each participant, when the device is carried in any of 6 different locations (not known a priori). When narrowed to a subset of four gaits and one known location, this improves to means of 92.2% with and 87.2% without personalization. Finally, we group our findings into design guidelines and quantify the variation in accuracy when an algorithm is trained for a known location and participant.
Combining acceleration and gyroscope data for motion gesture recognition using classifiers with dimensionality constraints (pp. 173-178)
  Sven Kratz; Michael Rohs; Georg Essl
Motivated by the addition of gyroscopes to a large number of new smart phones, we study the effects of combining accelerometer and gyroscope data on the recognition rate of motion gesture recognizers with dimensionality constraints. Using a large data set of motion gestures we analyze results for the following algorithms: Protractor3D, Dynamic Time Warping (DTW) and Regularized Logistic Regression (LR). We chose to study these algorithms because they are relatively easy to implement, thus well suited for rapid prototyping or early deployment during prototyping stages. For use in our analysis, we contribute a method to extend Protractor3D to work with the 6D data obtained by combining accelerometer and gyroscope data. Our results show that combining accelerometer and gyroscope data is also beneficial for algorithms with dimensionality constraints and improves the gesture recognition rate on our data set by up to 4%.
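   Of the three algorithms, DTW extends to the combined signal most directly: each time step simply becomes a 6-D vector (three accelerometer plus three gyroscope axes). A minimal sketch on synthetic data, not the authors' code:

     import numpy as np

     def dtw_distance(a, b):
         # Classic DTW over sequences of 6-D samples (accel + gyro per step).
         n, m = len(a), len(b)
         D = np.full((n + 1, m + 1), np.inf)
         D[0, 0] = 0.0
         for i in range(1, n + 1):
             for j in range(1, m + 1):
                 cost = np.linalg.norm(a[i - 1] - b[j - 1])
                 D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
         return D[n, m]

     rng = np.random.default_rng(1)
     template = rng.random((40, 6))    # recorded template gesture
     candidate = rng.random((35, 6))   # incoming gesture to classify
     print(round(dtw_distance(template, candidate), 3))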

Common-sense and agents

Mind the gap: collecting commonsense data about simple experiences (pp. 179-190)
  Jerry S. Weltman; S. Sitharama Iyengar; Michael Hegarty
In natural language, there are many gaps between what is stated and what is understood. Speakers and listeners fill in these gaps, presumably from some life experience, but no one knows how to get this experiential data into a computer. As a first step, we have created a methodology and software interface for collecting commonsense data about simple experiences. This work is intended to form the basis of a new resource for natural language processing.
   We model experience as a sequence of comic frames, annotated with the changing intentional and physical states of the characters and objects. To create an annotated experience, our software interface guides non-experts in identifying facts about experiences that humans normally take for granted. As part of this process, the system asks questions using the Socratic Method to help users notice difficult-to-articulate commonsense data. A test on ten subjects indicates that non-experts are able to produce high quality experiential data.
Learning non-myopically from human-generated reward (pp. 191-202)
  W. Bradley Knox; Peter Stone
Recent research has demonstrated that human-generated reward signals can be effectively used to train agents to perform a range of reinforcement learning tasks. Such tasks are either episodic -- i.e., conducted in unconnected episodes of activity that often end in either goal or failure states -- or continuing -- i.e., indefinitely ongoing. Another point of difference is whether the learning agent highly discounts the value of future reward -- a myopic agent -- or conversely values future reward appreciably. In recent work, we found that previous approaches to learning from human reward all used myopic valuation [7]. This study additionally provided evidence for the desirability of myopic valuation in task domains that are both goal-based and episodic.
   In this paper, we conduct three user studies that examine critical assumptions of our previous research: task episodicity, optimal behavior with respect to a Markov Decision Process, and lack of a failure state in the goal-based task. In the first experiment, we show that converting a simple episodic task to a non-episodic (i.e., continuing) task resolves some theoretical issues present in episodic tasks with generally positive reward and -- relatedly -- enables highly successful learning with non-myopic valuation in multiple user studies. The primary learning algorithm in this paper, which we call "VI-TAMER", is the first algorithm to successfully learn non-myopically from human-generated reward; we also empirically show that such non-myopic valuation facilitates higher-level understanding of the task. Anticipating the complexity of real-world problems, we perform two subsequent user studies -- one with a failure state added -- that compare (1) learning when states are updated asynchronously with local bias -- i.e., states quickly reachable from the agent's current state are updated more often than other states -- to (2) learning with the fully synchronous sweeps across each state in the VI-TAMER algorithm. With these locally biased updates, we find that the general positivity of human reward creates problems even for continuing tasks, revealing a distinct research challenge for future work.
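   The value-iteration backbone of such an approach can be sketched on a toy continuing task. Here a fixed reward table stands in for TAMER's learned human-reward model, and the chain world is invented; the actual VI-TAMER algorithm interleaves synchronous sweeps with live human reward:

     import numpy as np

     # Toy 1-D chain: states 0..4, actions move left (-1) or right (+1).
     # H[s] stands in for the learned model of human reward for reaching s;
     # the task is continuing, so reward keeps flowing at the goal state.
     H = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
     gamma = 0.9                       # non-myopic: future reward matters

     def step(s, a):
         return max(0, min(4, s + a))

     V = np.zeros(5)
     for _ in range(100):              # synchronous sweeps over all states
         V = np.array([max(H[step(s, a)] + gamma * V[step(s, a)]
                           for a in (-1, +1)) for s in range(5)])
     policy = [max((-1, +1), key=lambda a: V[step(s, a)]) for s in range(5)]
     print(V.round(2), policy)         # policy should point right, toward s=4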
Westland Row why so slow?: fusing social media and linked data sources for understanding real-time traffic conditions (pp. 203-212)
  Elizabeth M. Daly; Freddy Lecue; Veli Bicer
The advent of real-time traffic streaming offers users the opportunity to visualise current traffic conditions and congestion information. However, real-time information highlighting the underlying reasons for tail-backs remains largely unexplored. Broken traffic lights, an accident, a large concert, or road-works reveal important information for citizens and traffic operators alike. Providing such information in real-time requires intelligent mechanisms and user interfaces in order to (i) harness heterogeneous data sources (volume, velocity, variety, veracity) and (ii) make derived knowledge consumable, so users can visualize traffic conditions and congestion information and make better routing decisions while travelling. This work focuses on surfacing relevant information and explaining the underlying reasons behind traffic conditions. To this end, static data from event providers and planned road works, together with dynamically emerging events such as traffic accidents, localized weather conditions or unplanned obstructions captured through social media, are fused to provide users real-time feedback highlighting the causes of traffic congestion.
Curating and contextualizing Twitter stories to assist with social newsgathering (pp. 213-224)
  Arkaitz Zubiaga; Heng Ji; Kevin Knight
While journalism is evolving toward a rather open-minded participatory paradigm, social media presents overwhelming streams of data that make it difficult to identify the information of a journalist's interest. Given the increasing interest of journalists in broadening and democratizing news by incorporating social media sources, we have developed TweetGathering, a prototype tool that provides curated and contextualized access to news stories on Twitter. This tool was built with the aim of assisting journalists both with gathering and with researching news stories as users comment on them. Five journalism professionals who tested the tool found that it could assist them with gathering additional facts on breaking news, as well as facilitate the discovery of potential information sources, such as witnesses at the geographical locations of news events.

Emotion and user modeling

Detecting boredom and engagement during writing with keystroke analysis, task appraisals, and stable traits (pp. 225-234)
  Robert Bixler; Sidney D'Mello
It is hypothesized that the ability of a system to automatically detect and respond to users' affective states can greatly enhance the human-computer interaction experience. Although there are currently many options for affect detection, keystroke analysis offers several attractive advantages over traditional methods. In this paper, we consider the possibility of automatically discriminating between natural occurrences of boredom, engagement, and neutral states by analyzing keystrokes, task appraisals, and stable traits of 44 individuals engaged in a writing task. The analyses explored several different arrangements of the data: using downsampled and/or standardized data; distinguishing between three different affect states or groups of two; and using keystroke/timing features in isolation or coupled with stable traits and/or task appraisals. The results indicated that the use of raw data, together with the feature set that combined keystroke/timing features with task appraisals and stable traits, yielded accuracies that were 11% to 38% above random guessing and generalized to new individuals. Applications of our affect detector for intelligent interfaces that provide engagement support during writing are discussed.
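   A rough picture of a keystroke/timing pipeline (the features, simulated data and classifier below are illustrative guesses, not the paper's feature set):

     import numpy as np
     from sklearn.ensemble import RandomForestClassifier

     def keystroke_features(timestamps):
         # Toy features from key-press timestamps (seconds): typing rate,
         # mean/std of inter-key intervals, and count of long pauses.
         iki = np.diff(np.asarray(timestamps))
         return [len(timestamps) / (timestamps[-1] - timestamps[0]),
                 iki.mean(), iki.std(), int((iki > 2.0).sum())]

     rng = np.random.default_rng(2)
     X, y = [], []
     for label in (0, 1, 2):           # 0=boredom, 1=engagement, 2=neutral
         rate = [3.0, 0.8, 1.5][label] # e.g. slower typing when bored
         for _ in range(30):
             ts = np.cumsum(rng.exponential(rate, 100))
             X.append(keystroke_features(ts))
             y.append(label)

     clf = RandomForestClassifier(random_state=0).fit(X, y)
     sample = np.cumsum(rng.exponential(0.8, 100))
     print(clf.predict([keystroke_features(sample)]))  # likely "engagement"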
Informing intelligent user interfaces by inferring affective states from body postures in ubiquitous computing environments (pp. 235-246)
  Chiew Seng Sean Tan; Johannes Schöning; Kris Luyten; Karin Coninx
Intelligent User Interfaces can benefit from having knowledge of the user's emotion. However, current implementations for detecting affective states often constrain the user's freedom of movement by instrumenting her with sensors. This prevents affective computing from being deployed in naturalistic and ubiquitous computing contexts. In this paper, we present a novel system called mASqUE, which uses a set of association rules to infer someone's affective state from their body postures. This is done without any user instrumentation, using off-the-shelf and inexpensive commodity hardware: a depth camera tracks the body posture of the users, and their postures are also used as an indicator of their openness. By combining the posture information with physiological sensor measurements we were able to mine a set of association rules relating postures to affective states. We demonstrate the possibility of inferring affective states from body postures in ubiquitous computing environments, and our study also provides insights into how this opens up new possibilities for IUIs to access the affective states of users from body postures in a non-intrusive way.
Indexing cognitive workload based on pupillary response under luminance and emotional changes (pp. 247-256)
  Weihong Wang; Zhidong Li; Yang Wang; Fang Chen
Pupillary response is a popular physiological index of cognitive workload that can be used for the design and evaluation of adaptive interfaces in various areas of human-computer interaction (HCI) research. However, in practice, various confounding factors unrelated to workload, including changes in luminance conditions and emotional arousal, might degrade pupillary-response-based workload measures such as the commonly used mean pupil diameter. This work investigates pupillary response as a cognitive workload measure under the influence of such confounding factors. A video-based eye tracker is used to record pupillary response during arithmetic tasks under luminance and emotional changes. Machine-learning-based feature selection and classification techniques are proposed to robustly index cognitive workload based on pupillary response, even under the influence of noisy factors unrelated to workload.
Exploring 3D gesture metaphors for interaction with unmanned aerial vehicles (pp. 257-266)
  Kevin Pfeil; Seng Lee Koh; Joseph LaViola
We present a study exploring upper body 3D spatial interaction metaphors for control and communication with Unmanned Aerial Vehicles (UAVs) such as the Parrot AR Drone. We discuss the design and implementation of five interaction techniques using the Microsoft Kinect, based on metaphors inspired by UAVs, to support a variety of flying operations a UAV can perform. Techniques include a first-person interaction metaphor, where a user takes a pose like a winged aircraft; a game controller metaphor, where a user's hands mimic the control movements of console joysticks; "proxy" manipulation, where the user imagines manipulating the UAV as if it were in their grasp; and a pointing metaphor, in which the user assumes the identity of a monarch and commands the UAV as such. We examine qualitative metrics such as perceived intuitiveness, usability and satisfaction, among others. Our results indicate that novice users prefer certain 3D spatial techniques over the smartphone application bundled with the AR Drone. We also discuss the trade-offs in the technique design metrics based on results from our study.

Mobile applications

AppFunnel: a framework for usage-centric evaluation of recommender systems that suggest mobile applications (pp. 267-276)
  Matthias Böhmer; Lyubomir Ganev; Antonio Krüger
Mobile phones have evolved from communication to multi-purpose devices that assist people with applications in various contexts and tasks. The size of the mobile ecosystem is steadily growing and new applications become available every day. This increasing number of applications makes it difficult for end-users to find good applications. Recommender systems suggesting mobile applications are being built to help people find valuable applications. Since the nature of mobile applications differs from that of classical recommendation items (e.g. books, movies, other goods), not only can new approaches to recommendation be developed, but new paradigms for evaluating the performance of recommender systems are also advisable. During the lifecycle of mobile applications, different events can be observed that provide insights into users' engagement with particular applications. This gives rise to new approaches for the evaluation of recommender systems. In this paper, we present AppFunnel: a framework that allows for usage-centric evaluation considering different stages of application engagement. We present a case study and discuss capabilities for evaluating recommender engines by applying metrics to the AppFunnel.
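   The funnel idea (count, per recommended app, how many users reach each successive engagement stage) reduces to a small computation over event logs. The stage names and events here are hypothetical, not AppFunnel's actual schema:

     from collections import Counter

     STAGES = ["viewed", "installed", "opened", "kept_7_days"]

     events = [  # (user, app, deepest engagement stage reached)
         ("u1", "a", "kept_7_days"), ("u2", "a", "opened"),
         ("u3", "a", "viewed"), ("u1", "b", "installed"), ("u2", "b", "viewed"),
     ]

     def funnel(events, app):
         deepest = Counter(stage for _, a, stage in events if a == app)
         # A user counts toward a stage if their deepest stage is at or
         # beyond it, giving the classic monotone funnel.
         return [(s, sum(deepest[t] for t in STAGES[i:]))
                 for i, s in enumerate(STAGES)]

     print(funnel(events, "a"))   # viewed:3, installed:2, opened:2, kept:1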
Making graphic-based authentication secure against smudge attacks (pp. 277-286)
  Emanuel von Zezschwitz; Anton Koslow; Alexander De Luca; Heinrich Hussmann
Most of today's smartphones and tablet computers feature touchscreens as the main way of interaction. By using these touchscreens, oily residues of the users' fingers, smudge, remain on the device's display. As this smudge can be used to deduce formerly entered data, authentication tokens are jeopardized. Most notably, grid-based authentication methods, like the Android pattern scheme, are prone to such attacks.
   Based on a thorough development process using low-fidelity and high-fidelity prototyping, we designed three graphic-based authentication methods that leave smudge traces which are not easy to interpret. We present one grid-based and two randomized graphical approaches and report on two user studies that we performed to prove the feasibility of these concepts. The authentication schemes were compared to the widely used Android pattern authentication and analyzed in terms of performance, usability and security. The results indicate that our concepts are significantly more secure against smudge attacks while maintaining high input speed.
SmartDCap: semi-automatic capture of higher quality document images from a smartphone (pp. 287-296)
  Francine Chen; Scott Carter; Laurent Denoue; Jayant Kumar
People frequently capture photos with their smartphones, and some are starting to capture images of documents. However, the quality of captured document images is often lower than expected, even when an application that performs post-processing to improve the image is used. To improve the quality of captured images before post-processing, we developed the Smart Document Capture (SmartDCap) application that provides real-time feedback to users about the likely quality of a captured image. The quality measures capture the sharpness and framing of a page or regions on a page, such as a set of one or more columns, a part of a column, a figure, or a table. Using our approach, while users adjust the camera position, the application automatically determines when to take a picture of a document to produce a good quality result. We performed a subjective evaluation comparing SmartDCap and the Android Ice Cream Sandwich (ICS) camera application; we also used raters to evaluate the quality of the captured images. Our results indicate that users find SmartDCap to be as easy to use as the standard ICS camera application. Also, images captured using SmartDCap are sharper and better framed on average than images using the ICS camera application.
Functionality-based clustering using short textual description: helping users to find apps installed on their mobile device (pp. 297-306)
  David Lavid Ben Lulu; Tsvi Kuflik
In recent years, we have witnessed the incredible popularity and widespread adoption of mobile devices. Millions of Apps are being developed and downloaded by users at an amazing rate. These are multi-feature Apps that address a broad range of needs and functions. Nowadays, every user has dozens of Apps on his mobile device. As time goes on, it becomes more and more difficult simply to find the desired App among those installed on the mobile device. In spite of several attempts to address it, no good solution to this growing problem has yet been found. In this paper we suggest the use of unsupervised machine learning for clustering Apps based on their functionality, to allow users to access them easily. The functionality is elicited from App descriptions retrieved from various App stores and enriched with content from professional blogs. The Apps are clustered and grouped according to their functionality and presented hierarchically to the user in order to facilitate search on the small screen of the mobile device.
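   The core pipeline (represent each App by its description text, then cluster) is standard; a compact sketch with invented descriptions:

     from sklearn.feature_extraction.text import TfidfVectorizer
     from sklearn.cluster import KMeans

     descriptions = {   # hypothetical store descriptions
         "RunFast":   "track your runs, pace, calories and workout goals",
         "FitLog":    "log workouts, calories burned and fitness progress",
         "SnapShare": "share photos with friends and add photo filters",
         "PicEdit":   "edit photos, crop images and apply filters",
     }

     X = TfidfVectorizer(stop_words="english").fit_transform(descriptions.values())
     labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
     for app, cluster in zip(descriptions, labels):
         print(cluster, app)   # fitness Apps and photo Apps group together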
Real-time hand interaction for augmented reality on mobile phones (pp. 307-314)
  Wendy H. Chun; Tobias Höllerer
Over the past few years, Augmented Reality has become widely popular in the form of smartphone applications; however, most smartphone-based AR applications are limited in user interaction and do not support gesture-based direct manipulation of the augmented scene. In this paper, we introduce a new AR interaction methodology, employing users' hands and fingers to interact with the virtual (and possibly physical) objects that appear on the mobile phone screen. The goal of this project was to support different types of interaction (selection, transformation, and fine-grained control of an input value) while keeping the methodology for hand detection as simple as possible to maintain good performance on smartphones. We evaluated our methods in user studies, collecting task performance data and user impressions about this direct way of interacting with augmented scenes through mobile phones.

Keynote address

How mobile disrupts social as we know it (pp. 315-316)
  Monica S. Lam
Every computer revolution changes our lives dramatically; so will mobile devices. Mobile devices enable billions of people to capture, share, interact with, and consume real-time personal media in new and creative ways. In addition, being devices owned by individuals, they can form an autonomous computing fabric that frees us from the domination of existing centralized proprietary social networking services.
   This talk presents a system architecture called Musubi (Mobile, Social, and UBIquitous) that combines a novel and natural mobile social experience with a clean architecture that lets users choose different cloud backup services. In addition, Musubi is an app platform that makes it easy to create privacy-honoring social apps. This can open up new markets for social and collaborative apps in fields like education, health, and business, where centralized proprietary services are inappropriate. A fully working prototype of Musubi is available on both the Android and iPhone app stores.

Visualization

User-adaptive information visualization: using eye gaze data to infer visualization tasks and user cognitive abilities (pp. 317-328)
  Ben Steichen; Giuseppe Carenini; Cristina Conati
Information Visualization systems have traditionally followed a one-size-fits-all model, typically ignoring an individual user's needs, abilities and preferences. However, recent research has indicated that visualization performance could be improved by adapting aspects of the visualization to each individual user. To this end, this paper presents research aimed at supporting the design of novel user-adaptive visualization systems. In particular, we discuss results on using information on user eye gaze patterns while interacting with a given visualization to predict the user's visualization tasks, as well as user cognitive abilities including perceptual speed, visual working memory, and verbal working memory. We show that such predictions are significantly better than a baseline classifier even during the early stages of visualization usage. These findings are discussed in view of designing visualization systems that can adapt to each individual user in real-time.
GlueTK: a framework for multi-modal, multi-display human-machine interaction (pp. 329-338)
  Florian van de Camp; Rainer Stiefelhagen
As new input modalities allow interaction not just in front of a single display but throughout the whole room, application developers face new challenges. They have to handle many new input modalities, each with its own interface and pre-processing requirements; deal with multiple displays; and manage applications that are distributed across multiple machines. We present glueTK, a framework that abstracts from the complexities of these input modalities, allows the design of interfaces for a wide range of display sizes, and makes the distribution across multiple machines transparent to the developer as well as the user. With an example application we demonstrate the wide range of input modalities glueTK can support and the functionality this enables. GlueTK moves away from the focus on point-and-touch-like input modalities, enabling the design of applications tailored towards interactive rooms instead of the traditional desktop environment.
Optimizing temporal topic segmentation for intelligent text visualization (pp. 339-350)
  Shimei Pan; Michelle X. Zhou; Yangqiu Song; Weihong Qian; Fei Wang; Shixia Liu
We are building a topic-based, interactive visual analytic tool that aids users in analyzing large collections of text. To help users quickly discover content evolution and significant content transitions within a topic over time, here we present a novel, constraint-based approach to temporal topic segmentation. Our solution splits a discovered topic into multiple linear, non-overlapping sub-topics along a timeline by satisfying a diverse set of semantic, temporal, and visualization constraints simultaneously. For each derived sub-topic, our solution also automatically selects a set of representative keywords to summarize the main content of the sub-topic. Our extensive evaluation, including a crowd-sourced user study, demonstrates the effectiveness of our method over an existing baseline.
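   Stripped of the semantic and visualization constraints, splitting a topic's timeline into k homogeneous, contiguous sub-topics is a classic dynamic-programming problem; the sketch below minimises within-segment variance over per-time-step keyword vectors, a simplified stand-in for the paper's multi-constraint formulation:

     import numpy as np

     def segment(ts, k):
         # ts: (time steps x vocabulary) matrix; returns k contiguous spans.
         n = len(ts)
         cost = np.zeros((n, n))       # cost[i, j]: variance of ts[i..j]
         for i in range(n):
             for j in range(i, n):
                 seg = ts[i:j + 1]
                 cost[i, j] = ((seg - seg.mean(axis=0)) ** 2).sum()
         D = np.full((k + 1, n + 1), np.inf)
         D[0, 0] = 0.0
         back = np.zeros((k + 1, n + 1), dtype=int)
         for s in range(1, k + 1):
             for j in range(1, n + 1):
                 for i in range(s - 1, j):
                     c = D[s - 1, i] + cost[i, j - 1]
                     if c < D[s, j]:
                         D[s, j], back[s, j] = c, i
         cuts, j = [], n
         for s in range(k, 0, -1):
             cuts.append((back[s, j], j))
             j = back[s, j]
         return cuts[::-1]

     rng = np.random.default_rng(3)
     ts = np.vstack([rng.normal(m, 0.1, (10, 5)) for m in (0.0, 1.0, 0.5)])
     print(segment(ts, 3))   # should recover boundaries near 10 and 20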
Visualizing recommendations to support exploration, transparency and controllability (pp. 351-362)
  Katrien Verbert; Denis Parra; Peter Brusilovsky; Erik Duval
Research on recommender systems has traditionally focused on the development of algorithms to improve accuracy of recommendations. So far, little research has been done to enable user interaction with such systems as a basis to support exploration and control by end users. In this paper, we present our research on the use of information visualization techniques to interact with recommender systems. We investigated how information visualization can improve user understanding of the typically black-box rationale behind recommendations in order to increase their perceived relevance and meaning and to support exploration and user involvement in the recommendation process. Our study has been performed using TalkExplorer, an interactive visualization tool developed for attendees of academic conferences. The results of user studies performed at two conferences allowed us to obtain interesting insights for enhancing user interfaces that integrate recommendation technology. More specifically, the effectiveness and the probability of item selection both increase when users are able to explore and interrelate multiple entities -- i.e. items bookmarked by users, recommendations and tags.
Dynamic text management for see-through wearable and heads-up display systems (pp. 363-370)
  Jason Orlosky; Kiyoshi Kiyokawa; Haruo Takemura
Reading text safely and easily while mobile has been an issue with see-through displays for many years. For example, in order to effectively use optical see-through Head Mounted Displays (HMDs) or Heads-Up Display (HUD) systems in constantly changing dynamic environments, variables like lighting conditions, human or vehicular obstructions in a user's path, and scene variation must be dealt with effectively.
   This paper introduces a new intelligent text management system that actively manages the movement of text in a user's field of view. Research to date lacks a method for migrating user-centric content such as e-mail or text messages throughout a user's environment while mobile. Unlike most current annotation and view management systems, we use camera tracking to find, in real time, dark, uniform regions along the route on which a user is travelling. We then implement methodology to move text from one viable location to the next to maximize readability. A pilot experiment with 19 participants shows that the text placement of our system is preferred to text in fixed-location configurations.

User studies

Team reactions to voiced agent instructions in a pervasive game (pp. 371-382)
  Stuart Moran; Nadia Pantidi; Khaled Bachour; Joel E. Fischer; Martin Flintham; Tom Rodden; Simon Evans; Simon Johnson
The assumed role of humans as controllers and instructors of machines is changing. As systems become more complex and incomprehensible to humans, it will be increasingly necessary for us to place confidence in intelligent interfaces and follow their instructions and recommendations. This type of relationship becomes particularly intricate when we consider significant numbers of humans and agents working together in collectives. While instruction-based interfaces and agents already exist, our understanding of them within the field of Human-Computer Interaction is still limited.
   As such, we developed a large-scale pervasive game called 'Cargo', where a semi-autonomous rule-based agent distributes a number of text-to-speech instructions to multiple teams of players via their mobile phones as an interface. We describe how people received, negotiated and acted upon the instructions in the game, both individually and as a team, and how players' initial plans and expectations shaped their understanding of the instructions.
Recommending energy tariffs and load shifting based on smart household usage profiling (pp. 383-394)
  Joel E. Fischer; Sarvapali D. Ramchurn; Michael Osborne; Oliver Parson; Trung Dong Huynh; Muddasser Alam; Nadia Pantidi; Stuart Moran; Khaled Bachour; Steve Reece; Enrico Costanza; Tom Rodden; Nicholas R. Jennings
We present a system and study of personalized energy-related recommendation. AgentSwitch utilizes electricity usage data collected from users' households over a period of time to realize a range of smart energy-related recommendations on energy tariffs, load detection and usage shifting. The web service is driven by a third party real-time energy tariff API (uSwitch), an energy data store, a set of algorithms for usage prediction, and appliance-level load disaggregation. We present the system design and user evaluation consisting of interviews and interface walkthroughs. We recruited participants from a previous study during which three months of their household's energy use was recorded to evaluate personalized recommendations in AgentSwitch. Our contributions are a) a systems architecture for personalized energy services; and b) findings from the evaluation that reveal challenges in designing energy-related recommender systems. In response to the challenges we formulate design recommendations to mitigate barriers to switching tariffs, to incentivize load shifting, and to automate energy management.
SIDNIE: scaffolded interviews developed by nurses in education (pp. 395-406)
  Lauren Cairco Dukes; Toni Bloodworth Pence; Larry F. Hodges; Nancy Meehan; Arlene Johnson
One of the most common clinical education methods for teaching patient interaction skills to nursing students is having them role-play established scenarios with their classmates. Unfortunately, this is far from simulating the real-world experiences that they will soon face, and does not provide the immediate, impartial feedback necessary for developing interviewing skills. We present a system for Scaffolded Interviews Developed by Nurses In Education (SIDNIE) that supports baccalaureate nursing education by providing multiple guided interview practice sessions with virtual characters. Our scenario depicts a mother who has brought her five-year-old child to the clinic. In this paper we describe our system and report on a preliminary usability evaluation conducted with nursing students.
Helping users with information disclosure decisions: potential for adaptation (pp. 407-416)
  Bart P. Knijnenburg; Alfred Kobsa
Personalization relies on personal data about each individual user. Users are quite often reluctant though to disclose information about themselves and to be "tracked" by a system. We investigated whether different types of rationales (justifications) for disclosure that have been suggested in the privacy literature would increase users' willingness to divulge demographic and contextual information about themselves, and would raise their satisfaction with the system. We also looked at the effect of the order of requests, owing to findings from the literature. Our experiment with a mockup of a mobile app recommender shows that there is no single strategy that is optimal for everyone. Heuristics can be defined though that select for each user the most effective justification to raise disclosure or satisfaction, taking the user's gender, disclosure tendency, and the type of solicited personal information into account. We discuss the implications of these findings for research aimed at personalizing privacy strategies to each individual user.

Tactile and touch

Vibrobelt: tactile navigation support for cyclists (pp. 417-426)
  Haska Steltenpohl; Anders Bouwer
Tactile displays can be used without demanding the attention from the human visual system, which makes them attractive for use in wayfinding contexts, where visual attention should be directed at traffic and other information in the environment. To investigate the potential of tactile navigation for cyclists, we have designed and implemented Vibrobelt. This belt, worn around the waist, gives waypoint, distance and endpoint information using directional tactile cues. We evaluated Vibrobelt by comparing it to a visual navigation application. Twenty participants were asked to cycle two routes, each route with a different application. We measured the spatial knowledge acquisition and analyzed the visual focus of the participants. We found that Vibrobelt was successful at guiding all participants to their destinations over an unfamiliar route. Participants using Vibrobelt showed a lower error rate for recognizing images from the route than users of the visual system. Users of the visual system were generally navigating faster, and were better at recalling the route, showing a higher contextual route understanding. The endpoint distance encoding was not always correctly interpreted. Future research will improve Vibrobelt by making a clearer distinction between waypoint and endpoint information, and will test users in more complex navigational situations.
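   The directional encoding reduces to mapping a waypoint's bearing, relative to the cyclist's heading, onto one of the motors around the waist. A sketch with assumed parameters (eight motors, a linear distance-to-intensity ramp; the actual belt's encoding may differ):

     def motor_for_bearing(bearing_deg, heading_deg, n_motors=8):
         # Motor 0 sits at the navel; motors are evenly spaced clockwise.
         relative = (bearing_deg - heading_deg) % 360
         return round(relative / (360 / n_motors)) % n_motors

     def intensity_for_distance(distance_m, max_m=200.0):
         # Closer waypoint, stronger vibration (clamped to 0..1).
         return max(0.0, min(1.0, 1.0 - distance_m / max_m))

     # Waypoint due east while heading north: right-side motor, 0.75 intensity.
     print(motor_for_bearing(90, 0), round(intensity_for_distance(50), 2))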
Haptic interface for non-visual steering (pp. 427-434)
  Burkay Sucu; Eelke Folmer
Glare diminishes visual perception and is a significant cause of traffic accidents. Existing haptic automotive interfaces typically indicate when and in which direction to steer, but they don't convey how much to steer, as a driver typically determines this using visual feedback. We present a novel haptic interface that relies on an intelligent vehicle positioning system to indicate when, in which direction and how far to steer, so as to facilitate steering without any visual feedback. Our interface may improve driving safety when a driver is temporarily blinded, for example, due to glare or fog. Three user studies were performed: the first seeks to understand driving with visual feedback, the second evaluates two different haptic encoding mechanisms with no visual feedback present, and the third evaluates the supplemental effect of haptic feedback when used in conjunction with visual feedback. The studies show that this interface allows for blind steering in small curves and that it can improve a driver's lane-keeping ability when combined with visual feedback.
Non-visual skimming on touch-screen devices BIBAFull-Text 435-444
  Faisal Ahmed; Andrii Soviak; Yevgen Borodin; I. V. Ramakrishnan
While reading on touch-screens, sighted users can quickly pan through content, skim it, and pick out bits and pieces of information before deciding to read it more carefully. In contrast, blind users have to rely on a screen reader to narrate the content to them. To go through the text quickly, blind users employ gestures that direct the screen reader to skip to the next line or the next paragraph. However, the serial audio interface of the screen reader makes it difficult for blind users to get a sense of what is important before listening to at least part of the content. This makes ad hoc skimming with gestures slow and ineffective. We address this problem in this paper; specifically, we propose a non-visual skimming interface that enables blind users to control the amount of content narrated with simple pinch-in and pinch-out gestures. This interface simulates the skimming experience enjoyed by sighted people, and enables blind users to listen to the gist of the content while controlling the speed of information intake. We report on a user study demonstrating that the proposed interface significantly outperforms the ad hoc skimming techniques employed by blind users. Our results suggest that the proposed approach holds promise in empowering blind users to access digitized information much faster.
Multi-tap sliders: advancing touch interaction for parameter adjustment BIBAFull-Text 445-452
  Sashikanth Damaraju; Jinsil Hwaryoung Seo; Tracy Hammond; Andruid Kerne
Research in multi-touch interaction has typically focused on direct spatial manipulation; techniques have been created to produce the most intuitive mapping between the movement of the hand and the resulting change in the virtual object. However, as we attempt to design for more complex operations, the expectation of spatial manipulation becomes infeasible.
   We introduce Multi-tap Sliders for operation in what we call abstract parametric spaces, which do not have an obvious literal spatial representation, such as exposure, brightness, contrast and saturation in image editing. This new widget design promotes multi-touch interaction for prolonged use in scenarios that require the adjustment of multiple parameters as part of an operation. The multi-tap sliders encourage the user to keep her visual focus on the target, instead of requiring her to look back at the interface.
   Our research emphasizes ergonomics, clear visual design, and fluid transition between the selection of parameters and their subsequent adjustment for a given operation. We demonstrate a new technique for quickly selecting and adjusting multiple numerical parameters. A preliminary user study indicates improvements over traditional sliders.

IUI 2013-03-19 Volume 2

Student consortium

Designing context-aware display ecosystems BIBAFull-Text 1-4
  Jakub Dostal
Display ecosystems encapsulate a number of independent and/or interconnected displays and their users. We are seeing the emergence of more complex display ecosystems, whether due to the number of collaborators, the number of devices or displays, the complexity of the user interface, increased information flow, or a combination of all of these factors. I hypothesise that interactions within display ecosystems would benefit from awareness of the computational, physiological and environmental context. This paper presents a brief overview of related work as well as the research goals, relevant methodology and current research status.
Measuring situation awareness of micro-neurosurgeons BIBAFull-Text 5-8
  Shahram Eivazi
Micro-neurosurgery is performed with a high-power microscope. The microscope provides a precise perception of the operative neurosurgical anatomy. The surgery is conducted using miniaturized instruments through a small hole in the skull or the spinal canal. Although there are predefined routine procedures in micro-neurosurgery, surgeons still have to maintain a high level of situation awareness (SA) to operate safely. In this paper, I discuss my PhD research, which focuses on the relationship between eye movement patterns and the level of SA.
Multi-modal context-awareness for ambient intelligence environments BIBAFull-Text 9-12
  Georgios Galatas; Fillia Makedon
Context-awareness constitutes a fundamental attribute of a smart environment. Our research aims at advancing the context-awareness capabilities of ambient intelligence environments by combining multi-modal information from both stationary and moving sensors. The collected data enables us to perform person identification, 3-D localization and activity recognition. In addition, we explore closed-loop feedback by integrating autonomous robots that interact with the users.
Real-time classification of dynamic hand gestures from marker-based position data BIBAFull-Text 13-16
  Andrew Gardner; Christian A. Duncan; Rastko Selmic; Jinko Kanno
In this paper we describe plans for a dynamic hand gesture recognition system based on motion capture cameras with unlabeled markers. The intended classifier is an extension of previous work on static hand gesture recognition in the same environment. The static gestures are to form the basis of a vocabulary that will allow precise descriptions of various expressive hand gestures when combined with inferred motion and temporal data. Hidden Markov Models and dynamic time warping are expected to be useful tools in achieving this goal.
Leveraging the crowd for creating wireframe-based exploration of mobile design pattern gallery BIBAFull-Text 17-20
  Yi-Ching Huang; Chun-I Wang; Jane Hsu
The use of mobile devices is becoming ever more popular. Many designers are entering the area of mobile app design and development. However, due to limitations of screen size and context of use, it is extremely difficult for inexperienced designers or developers to produce a good mobile application design. This paper introduces a wireframe-based matching technique to help people find relevant mobile UI design examples for inspiration. We leveraged the wisdom of the crowd to create coherent mappings between wireframes and design examples. Furthermore, we constructed a mobile UI design gallery for designers to explore inspiring examples during the wireframing stage.
Adaptive game for reducing aggressive behavior BIBAFull-Text 21-24
  Juan F. Mancilla-Caceres; Eyal Amir; Dorothy Espelage
Peer influence in social networks has long been recognized as one of the key factors in many of the social health issues that affect young people. In order to study peer networks, scientists have relied on the use of self-report surveys, which impose limitations on the types of issues that can be studied. On the other hand, the ever increasing use of computers for communication has given rise to new ways of studying group dynamics and, even more importantly, it has enabled a new way to affect those dynamics as they are detected. Our work is focused on designing and analyzing computer social games that can be used as data collection tools for social interactions, and that can also react and change accordingly in order to promote prosocial, rather than aggressive, behavior.
User interface adaptation based on user feedback and machine learning BIBAFull-Text 25-28
  Nesrine Mezhoudi
With the growing need for intelligent software, exploring the potential of Machine Learning (ML) algorithms for User Interface (UI) adaptation becomes an ultimate requirement. The work reported in this paper aims at enhancing UI interaction by using a Rule Management Engine (RME) to handle a training phase for personalization. This phase is intended to teach the system novel adaptation strategies based on end-user feedback concerning their interaction (history, preferences, etc.). The goal is also to ensure adaptation learning by capitalizing on user feedback via a promoting/demoting technique, and then to employ it later at different levels of UI development.
Towards adaptive dialogue systems for assistive living environments BIBAFull-Text 29-32
  Alexandros Papangelis; Vangelis Karkaletsis; Heng Huang
Adaptive Dialogue Systems can be seen as smart interfaces that typically use natural language (spoken or written) as a means of communication. They are being used in many applications, such as customer service, in-car interfaces, and even rehabilitation, and therefore it is essential that these systems be robust, scalable and quickly adaptable in order to cope with changing user or system needs or environmental conditions. Making Dialogue Systems adaptive means overcoming several challenges, such as scalability or lack of training data; achieving adaptation online is an even greater challenge. We propose to build such a system that will operate in an Assistive Living Environment and provide its services as a coach to patients who need to perform rehabilitative exercises. We are currently developing it, using the Robot Operating System on a robotic platform.
From small screens to big displays: understanding interaction in multi-display environments BIBAFull-Text 33-36
  Teddy Seyed; Chris Burns; Mario Costa Sousa; Frank Maurer
Devices such as tablets, mobile phones, tabletops and wall displays all incorporate different sizes of screens, and are now commonplace in a variety of situations and environments. Environments that incorporate these devices, known as multi-display environments (MDEs), are highly interactive and innovative, but interaction in these environments is not well understood. The research presented here investigates and explores interaction and users in MDEs. This exploration tries to understand users' conceptual models of MDEs and then to examine and validate interaction approaches that can make them more usable. In addition to a brief literature review, the methodology, research goals and current research status are presented.
Computational approaches to visual attention for interaction inference BIBAFull-Text 37-40
  Hana Vrzakova
Many aspects of interaction are hard to directly observe and measure. My research focuses on particular aspects of UX such as cognitive workload, problem solving or engagement, and establishes computational links between them and visual attention. Using machine learning and pattern recognition techniques, I aim to achieve automatic inferences for HCI and employ them as enhancements in gaze-aware interfaces.

Demonstrations

Deploying speech interfaces to the masses BIBAFull-Text 41-42
  Aasish Pappu; Alexander Rudnicky
Speech systems are typically deployed either over phones, e.g., IVR agents, or on embodied agents, e.g., domestic robots. Most of these systems are limited to a particular platform, i.e., they are accessible only by phone or in situated interactions. This limits scalability and the potential domain of operation. Our goal is to make speech interfaces more widely available, and we propose a new approach for deploying such interfaces on the internet alongside traditional platforms. In this work, we describe a lightweight speech interface architecture built on top of Freeswitch, an open source softswitch platform. A softswitch enables us to provide users with access over several types of channels (phone, VOIP, etc.) as well as to support multiple users at the same time. We demonstrate two dialog applications developed using this approach: 1) Virtual Chauffeur, a voice-based virtual driving experience, and 2) Talkie, a speech-based chat bot.
Real-time direct manipulation of screen-based videos BIBAFull-Text 43-44
  Laurent Denoue; Scott Carter; Matthew Cooper; John Adcock
We describe direct video manipulation interactions applied to screen-based tutorials. In addition to using the video timeline, users of our system can quickly navigate within the video with the mouse wheel, double-click over a rectangular region to zoom in and out, or drag a box over the video canvas to select text and scrub the video to the end of a text line, even if it is not shown in the current frame. We describe the video processing techniques developed to implement these direct video manipulation techniques, and show how they are implemented to run in most modern web browsers using HTML5's CANVAS and JavaScript.
PersonalityViz: a visualization tool to analyze people's personality with social media BIBAFull-Text 45-46
  Liang Gou; Jalal Mahmud; Eben Haber; Michelle Zhou
This paper presents an interactive visualization tool, PersonalityViz, to help people understand their personality traits derived from social media. The system uses the Linguistic Inquiry and Word Count (LIWC) text analysis tool and LIWC/Big Five personality correlations to compute a person's Big Five personality from one's tweets. It provides an interactive visual interface that allows a user to explore her personality traits over time, and examine the visual evidence to understand how the personality traits are derived from the relevant tweets.
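For a concrete sense of the scoring step described above, here is a minimal sketch of correlation-weighted Big Five scoring from LIWC category frequencies; the category list and correlation weights are hypothetical placeholders, not the correlations used by PersonalityViz.

```python
# Minimal sketch: correlation-weighted Big Five scoring from LIWC category
# frequencies. Categories and weights below are illustrative placeholders,
# not the values used by PersonalityViz.

# Relative frequency of each LIWC category in a user's tweets,
# e.g. category word counts normalized by total word count.
liwc_freqs = {"posemo": 0.042, "negemo": 0.011, "social": 0.095, "i": 0.063}

# Hypothetical LIWC-to-trait correlation weights (one row per trait).
correlations = {
    "extraversion": {"posemo": 0.16, "social": 0.21, "i": -0.05},
    "neuroticism":  {"negemo": 0.24, "i": 0.12},
}

def trait_score(trait):
    """Score a trait as the correlation-weighted sum of category frequencies."""
    weights = correlations[trait]
    return sum(weights.get(cat, 0.0) * f for cat, f in liwc_freqs.items())

for trait in correlations:
    print(trait, round(trait_score(trait), 4))
```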
E3-player: emotional excitement enhancing video player using skin conductance response BIBAFull-Text 47-48
  Takumi Shirokura; Nagisa Munekata; Tetsuo Ono
We developed E3-player, a novel video player with three operation modes that enhances video experiences by using a user's physiological inputs. The system's purpose is to enhance users' emotional excitement by reinforcing their responses to the videos they are watching. E3-player users need only attach a physiological sensor to their hand and wear noise-cancelling headphones. Through our experiments, we confirmed that the E3-player can indeed enhance the video experience and provide new video experiences for viewers.
MoFIS: a mobile user interface for semi-automatic extraction of food product ingredient lists BIBAFull-Text 49-50
  Tobias Leidinger; Lübomira Spassova; Andreas Arens; Norbert Rösch
The availability of food ingredient information in digital form is a major factor in modern information systems related to diet management and health issues. Although ingredient information is printed on food product labels, corresponding digital data is rarely available for the public. In this demo, we present the Mobile Food Information Scanner (MoFIS), a mobile user interface designed to enable users to semi-automatically extract ingredient lists from food product packaging.
Keeping wiki content current via news sources BIBAFull-Text 51-52
  Rachel Adams; Alex Kuntz; Morgan Marks; William Martin; David Musicant
Online resources known as wikis are commonly used for collection and distribution of information. We present a software implementation that assists wiki contributors with the task of keeping a wiki current. Our demonstration, built using English Wikipedia, enables wiki contributors to subscribe to sources of news, based on which it makes intelligent recommendations for pages within Wikipedia where the new content should be added. This tool is also potentially useful for helping new Wikipedia editors find material to contribute.
Interactive design of planar curves based on spatial augmented reality BIBAFull-Text 53-54
  Ahyun Lee; Jeong Dae Suh; Joo-Haeng Lee
In this paper, we introduce an interactive application for planar curve design in the real world based on spatial augmented reality (SAR). The key component is a projector-camera unit that recognizes physical control objects (i.e., key points of an intended curve) using a camera and displays the design result (i.e., a B-spline curve) directly on a real-world surface using a projector. Usually, geometric design is performed with the aid of CAD software and the traditional user interfaces of a computer system. The main contribution of this paper is the application of spatial augmented reality techniques in the domain of computer-aided geometric design (CAGD) for more tangible and intuitive interaction in the real world. We describe the features of the prototype system and demonstrate the working application with examples.
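As an illustration of the curve-generation step, the following sketch evaluates a clamped cubic B-spline from a handful of control points using SciPy; the control-point coordinates stand in for camera-detected key points, and the clamped-knot construction is an assumption for illustration, not necessarily the prototype's exact formulation.

```python
# Minimal sketch: evaluate a clamped cubic B-spline from detected control
# points, as a stand-in for the curve-generation step described above.
import numpy as np
from scipy.interpolate import BSpline

ctrl = np.array([[0, 0], [1, 2], [3, 3], [5, 1], [6, 0]], dtype=float)
k = 3                                    # cubic degree
n = len(ctrl)
# Clamped knot vector: end knots repeated k+1 times (len(t) == n + k + 1).
t = np.concatenate(([0] * (k + 1), np.arange(1, n - k), [n - k] * (k + 1)))
t = t / t[-1]                            # normalize knots to [0, 1]

spline = BSpline(t, ctrl, k)
u = np.linspace(0, 1, 100)
curve = spline(u)                        # 100 points to project on the surface
print(curve[:3])
```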
Namelette: a tasteful supporter for creative naming BIBAFull-Text 55-56
  Güzde Özbal; Carlo Strapparava
In this paper, we introduce a system that supports the naming process by exploiting natural language processing and linguistic creativity techniques in a completely unsupervised fashion. The system generates two types of neologisms based on the category of the service to be named and the properties to be underlined. While the first type consists of homophonic puns and metaphors, the second consists of neologisms that are produced by adding Latin suffixes to English words or homophonic puns. During this process, both semantic appropriateness and sound pleasantness of the generated names are taken into account.
A system for facial expression-based affective speech translation BIBAFull-Text 57-58
  Zeeshan Ahmed; Ingmar Steiner; Éva Székely; Julie Carson-Berndsen
In the emerging field of speech-to-speech translation, emphasis is currently placed on the linguistic content, while the significance of paralinguistic information conveyed by facial expression or tone of voice is typically neglected. We present a prototype system for multimodal speech-to-speech translation that is able to automatically recognize and translate spoken utterances from one language into another, with the output rendered by a speech synthesis system. The novelty of our system lies in the technique of generating the synthetic speech output in one of several expressive styles that is automatically determined using a camera to analyze the user's facial expression during speech.
SKIMMR: machine-aided skim-reading BIBAFull-Text 59-60
  Vít Novácek; Gully Burns
Unlike full reading, 'skim-reading' involves looking quickly over information in an attempt to cover more material whilst still retaining a superficial view of the underlying content. Within this work, we emulate this natural human activity by providing a dynamic graph-based view of entities automatically extracted from text using superficial text parsing/processing techniques. We provide a preliminary web-based tool (called 'SKIMMR') that generates a network of inter-related concepts from a set of documents. In SKIMMR, a user may browse the network to investigate the lexically-driven information space extracted from the documents. When a particular area of that space looks interesting to a user, the tool can display the documents that are most relevant to the displayed concepts. We present this as a simple, viable methodology for browsing a document collection (such as a collection of scientific research articles) in an attempt to limit the information overload of examining that collection. This paper presents the motivation and an overview of the approach, outlines technical details of the preliminary SKIMMR implementation, describes the tool from the user's perspective and summarises the related work.
SciNet: a system for browsing scientific literature through keyword manipulation BIBAFull-Text 61-62
  Dorota Glowacka; Tuukka Ruotsalo; Ksenia Konyushkova; Kumaripaba Athukorala; Samuel Kaski; Giulio Jacucci
Techniques for both exploratory and known-item search tend to direct users only to more specific subtopics or individual documents, rather than allowing them to direct the exploration of the information space. We present SciNet, an interactive information retrieval system that combines Reinforcement Learning techniques with a novel user interface design to allow active engagement of users in directing the search. Users can directly manipulate document features (keywords) to indicate their interests, and Reinforcement Learning is used to model the user by allowing the system to trade off between exploration and exploitation. This gives users the opportunity to direct their search more effectively.
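The exploration/exploitation trade-off mentioned above could, for illustration, be realized with a bandit-style rule such as UCB1; the sketch below is a stand-in under that assumption and does not reproduce SciNet's actual reinforcement learning model.

```python
# Minimal sketch of an exploration/exploitation rule over keywords, using
# UCB1 as a stand-in for the paper's reinforcement learning model.
import math

keywords = ["retrieval", "bandits", "interface", "visualization"]
pulls = {k: 1 for k in keywords}          # times each keyword was shown
reward_sum = {k: 0.5 for k in keywords}   # smoothed accumulated feedback

def ucb_score(k, total):
    mean = reward_sum[k] / pulls[k]       # estimated relevance
    return mean + math.sqrt(2 * math.log(total) / pulls[k])  # + uncertainty

def next_keyword():
    """Pick the keyword balancing estimated relevance against uncertainty."""
    total = sum(pulls.values())
    return max(keywords, key=lambda k: ucb_score(k, total))

k = next_keyword()
pulls[k] += 1
reward_sum[k] += 1.0   # e.g. the user moved this keyword toward "relevant"
```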
A multimodal dialogue interface for mobile local search BIBAFull-Text 63-64
  Patrick Ehlen; Michael Johnston
Speak4itSM uses a multimodal interface to perform mobile search for local businesses. Users combine simultaneous speech and touch to input queries or commands, for example, by saying, "gas stations", while tracing a route on a touchscreen. This demonstration will exhibit an extension of our multimodal semantic processing architecture from a one-shot query system to a multimodal dialogue system that tracks dialogue state over multiple turns and resolves prior context using unification-based context resolution. We illustrate the capabilities and limitations of this approach to multimodal interpretation, describing the challenges of supporting true multimodal interaction in a deployed mobile service, while offering an interactive demonstration on tablets and smartphones.
Teaching agents with human feedback: a demonstration of the TAMER framework BIBAFull-Text 65-66
  W. Bradley Knox; Peter Stone; Cynthia Breazeal
Incorporating human interaction into agent learning yields two crucial benefits. First, human knowledge can greatly improve the speed and final result of learning compared to pure trial-and-error approaches like reinforcement learning. And second, human users are empowered to designate "correct" behavior. In this abstract, we present research on a system for learning from human interaction -- the TAMER framework -- then point to extensions to TAMER, and finally describe a demonstration of these systems.
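The following minimal tabular sketch illustrates the core TAMER idea of learning a model of the human trainer's feedback and acting greedily on it; the state and action names, learning rate, and feedback encoding are illustrative choices, and the full framework's credit assignment and function approximation are omitted.

```python
# Minimal tabular sketch of the core TAMER idea: learn a model H(s, a) of
# the human trainer's feedback and act greedily on it (no discounting of
# future reward, unlike conventional RL).
import random
from collections import defaultdict

H = defaultdict(float)      # predicted human reinforcement per (state, action)
ACTIONS = ["left", "right", "stay"]
ALPHA = 0.2                 # learning rate

def act(state):
    """Choose the action the human is predicted to reward most."""
    best = max(H[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if H[(state, a)] == best])

def update(state, action, human_feedback):
    """Move the prediction toward the feedback signal (e.g. -1, 0, or +1)."""
    key = (state, action)
    H[key] += ALPHA * (human_feedback - H[key])

a = act("s0")
update("s0", a, +1.0)       # trainer pressed the "good" button
```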
An intelligent web-based interface for programming content detection in q&a forums BIBAFull-Text 67-68
  Mahdy Khayyamian; Jihie Kim
In this demonstration, we introduce a novel web-based intelligent interface that automatically detects and highlights programming content (programming code and messages) in Q&A programming forums. We expect our interface to enhance the visual presentation of such forum content and to encourage effective participation.
   We solve this problem using several alternative approaches: a dictionary-based baseline method, a non-sequential Naïve Bayes classification algorithm, and Conditional Random Fields (CRF), a sequential labeling framework. The best results are produced by the CRF method, with an F1-score of 86.9%.
   We also experimentally validate the robustness of our classifier by testing the CRF model built on a C++ forum against a Python and a Java dataset. The results indicate that the classifier works quite well across different domains.
   To demonstrate the detection results, we developed a web-based graphical user interface that accepts a programming forum message as input, processes it using the trained CRF model, and then displays the programming content snippets in a different font.
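As an illustration of the dictionary-based baseline mentioned above, the following sketch flags a forum-message line as programming content when enough of its tokens match a small keyword/symbol dictionary; the dictionary and threshold are hypothetical, not the paper's.

```python
# Minimal sketch of a dictionary-based baseline: flag a forum-message line
# as code when enough of its tokens match a keyword/symbol dictionary.
import re

CODE_TOKENS = {"int", "for", "while", "return", "void", "include",
               "{", "}", ";", "==", "++", "->", "printf"}
THRESHOLD = 0.3   # illustrative fraction of code-like tokens

def looks_like_code(line):
    tokens = re.findall(r"[A-Za-z_]+|[{};]|==|\+\+|->", line)
    if not tokens:
        return False
    hits = sum(1 for t in tokens if t in CODE_TOKENS)
    return hits / len(tokens) >= THRESHOLD

msg = ["Here is my loop:", "for (int i = 0; i < n; ++i) {", "    sum += i;", "}"]
for line in msg:
    print(looks_like_code(line), line)
```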

Posters

Improving rich internet application development using patterns BIBAFull-Text 69-70
  Jalal Mahmud
With changing customer requirements, web development, especially the development of Rich Internet Applications (RIA) with complex widgets and data-driven behavior, can be a time-consuming task. In our previous work [3], we presented a test-driven web development approach that uses ClearScript test cases as requirements to automatically generate widgets, thus reducing the barrier to web development and testing. We extend this work and develop a machine-learning-based algorithm to identify RIA patterns [1] from requirements specified as test cases, and to automatically instantiate them using simple rules. We also present the performance of our algorithm and a user study that demonstrates the viability of our approach.
Adaptable probabilistic flick keyboard based on HMMs BIBAFull-Text 71-72
  Toshiyuki Hagiya; Tsuneo Kato
To provide an accurate and user-adaptable software keyboard for touchscreens, we propose a probabilistic flick keyboard based on HMMs. This keyboard can reduce input errors by taking the time series of actual touch positions into consideration and by user adaptation. We evaluated the performance of the HMM-based flick keyboard and MLLR adaptation. Experimental results showed that a user-dependent model reduced the error rate by 28.2%. In a practical setting, MLLR user adaptation with only 10 words reduced the error rate by 16.5% and increased typing speed by 10.5%.
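To make the HMM decoding idea concrete, here is a minimal sketch in which keys are hidden states, touch points are noisy 2-D observations, and Viterbi recovers the most likely key sequence; the layout, noise level, and uniform transition model are simplifying assumptions rather than the paper's flick-aware formulation.

```python
# Minimal sketch of HMM-style decoding for a touch keyboard: keys are
# hidden states, touches are noisy 2-D observations, Viterbi decodes.
import numpy as np

KEYS = {"a": (0.0, 0.0), "k": (1.0, 0.0), "s": (0.0, 1.0)}  # toy key centers
SIGMA = 0.3                                                 # touch noise

def log_emission(touch, key):
    cx, cy = KEYS[key]
    d2 = (touch[0] - cx) ** 2 + (touch[1] - cy) ** 2
    return -d2 / (2 * SIGMA ** 2)      # log Gaussian, constants dropped

def viterbi(touches):
    states = list(KEYS)
    log_trans = -np.log(len(states))   # uniform; a language model would go here
    score = {s: log_emission(touches[0], s) for s in states}
    paths = {s: [s] for s in states}
    for touch in touches[1:]:
        new_score, new_paths = {}, {}
        for s in states:
            # Transition term is constant here; with a language model it
            # would depend on the (previous key, current key) pair.
            best_prev = max(states, key=lambda p: score[p] + log_trans)
            new_score[s] = score[best_prev] + log_trans + log_emission(touch, s)
            new_paths[s] = paths[best_prev] + [s]
        score, paths = new_score, new_paths
    return "".join(paths[max(states, key=score.get)])

print(viterbi([(0.1, -0.1), (0.9, 0.2), (0.1, 0.9)]))   # -> "aks"
```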
VibroTactor: low-cost placement-aware technique using vibration echoes on mobile devices BIBAFull-Text 73-74
  Sungjae Hwang; Kwangyun Wohn
In this paper, we present a low-cost placement-aware technique, called VibroTactor, which allows mobile devices to determine where they are placed (e.g., in a pocket, on a phone holder, on the bed, or on the desk). This is achieved by filtering and analyzing the acoustic signal generated when the mobile device vibrates. The advantage of this technique is that it is inexpensive and easy to deploy because it uses a microphone, which is already embedded in standard mobile devices. To verify this idea, we implemented a prototype and conducted a preliminary test. The results show that the system achieves an average success rate of 91% across 12 different real-world placement sets.
Magnetic marionette: magnetically driven elastic controller on mobile device BIBAFull-Text 75-76
  Sungjae Hwang; Myungwook Ahn; Kwangyun Wohn
In this paper, we present the Magnetic Marionette, a magnetically driven elastic controller that enables tangible interaction on mobile devices. This technique can identify eight different gestures with over 99% accuracy by sensing and tracking the magnets embedded in the controller. The advantage of this technique is that it is lightweight, battery-free, and inexpensive because it uses a magnetometer, which is already embedded in today's smart phones. This simple and novel technique allows users to achieve richer tactile feedback, expand their interaction area, and enhance expressiveness without the need for hardware modification.
Extracting document relationships by analyzing user's activity history BIBAFull-Text 77-78
  Akira Karasudani; Satoshi Iwata; Tatsuro Matsumoto; Hirokazu Aritake
In order to reduce people's workload in looking for valuable information in a large amount of available information, recommendation systems, task management systems, and the like are attracting considerable attention. Such systems are expected to allow easy access to information in the current work context. In developing these systems, how to capture a user's current work context in order to know what information the user wants is a key point. We have developed a novel method to extract relationships among documents, as the work context, by analyzing the user's activity history according to several viewpoints on how people memorize and seek information. We report the details of the proposed method, the evaluation results, and an application example.
Eye corner detector robust to shape and illumination changes BIBAFull-Text 79-80
  Sébastien Picard; Shanshan Yu; Satoshi Nakashima
We introduce a robust and accurate eye corner detector. The purpose of this technology is to improve the robustness of inputs to natural user interfaces, such as eye movements, in real-life environments and across various users. Our technology relies on the pupil centers, and particularly the inter-pupil direction, to define simple features that capture the structural characteristics of the eye corners independently of the shape and appearance of their neighborhoods. Our method proved robust to different lighting environments, eye shapes and patterns of reflection on glasses. We report an error below 2% of the inter-pupil distance for 98% of the images, while maintaining a processing time below 3 ms for computation on both eyes.
WhoAmi: profiling attendees before meetings BIBAFull-Text 81-82
  Jianqiang Shen; Oliver Brdiczka; Juan Liu; Masafumi Suzuki
Mobile phones are widely used to access, retrieve and organize information. In this paper, we present the meshin system, an intelligent personal assistant that organizes messages, notifications and appointments for mobile phone users. To help users prepare for meetings, meshin automatically creates a profile for each meeting attendee: it searches the Internet with the attendee's name and the domain of the attendee's email address, then retrieves and aggregates information about this attendee. To gain a better understanding of the attendee, it further estimates a personality profile based on the emails he/she wrote. Our experimental results show that our system can predict personality with reasonable accuracy (95%).
Accelerometer-based hand gesture recognition using feature weighted naïve Bayesian classifiers and dynamic time warping BIBAFull-Text 83-84
  David Mace; Wei Gao; Ayse Coskun
Accelerometer-based gesture recognition is a major area of interest in human-computer interaction. In this paper, we compare two approaches: naïve Bayesian classification with feature separability weighting [1] and dynamic time warping [2]. Algorithms based on these two approaches are introduced and the results are compared. We evaluate both algorithms with four gesture types and five samples from each of five different people. The gesture identification accuracies for Bayesian classification and dynamic time warping are 97% and 95%, respectively.
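For reference, a minimal implementation of the dynamic time warping distance used in the comparison might look as follows; nearest-template classification under DTW distance is the standard formulation, while the sample traces below are placeholders.

```python
# Minimal sketch of dynamic time warping between two accelerometer traces,
# the second of the two approaches compared above. Each trace is a sequence
# of (x, y, z) samples; a gesture is classified by the nearest training
# template under DTW distance.
import math

def dtw(a, b):
    """DTW distance between two sequences of 3-axis accelerometer samples."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(a[i - 1], b[j - 1])   # Euclidean sample distance
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

template = [(0, 0, 1), (0, 1, 1), (1, 1, 0)]          # stored training gesture
query    = [(0, 0, 1), (0, 0, 1), (0, 1, 1), (1, 1, 0)]
print(dtw(query, template))
```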
Ghost-hunting: a cursor-based pointing technique with picture guide indication of the shortest path BIBAFull-Text 85-86
  Chihiro Kuwabara; Keiko Yamamoto; Itaru Kuramoto; Yoshihiro Tsujino; Mitsuru Minakuchi
Ghost-Hunting (GH) is a new technique that improves pointing performance in a graphical user interface (GUI) by expanding targets to facilitate easier access. GH exploits the fact that expanding the size of onscreen targets decreases the required cursor movement distance. GH shows guides, called ghosts, at the end points of the shortest movement paths inside the expanded target areas. Users can optimize their cursor movements simply by moving the cursor towards the ghosts, unlike other techniques that use the invisible outline of an expanded target, such as the Bubble Cursor. We conducted an experimental evaluation to clarify the effectiveness of GH in menu-item selection tasks. The results show that GH's selection time was significantly faster than that of the ordinary cursor or the Bubble Cursor. In particular, GH is faster than the Bubble Cursor in environments with a high density of targets.
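A minimal geometric sketch of ghost placement follows, assuming circular expanded target areas (an illustrative assumption, not necessarily the paper's model): the ghost is the endpoint of the shortest cursor path, i.e., the point on the expanded boundary nearest the cursor.

```python
# Minimal geometric sketch of placing a "ghost": assuming a circular
# expanded target area, the endpoint of the shortest cursor path is the
# point on the expanded boundary nearest to the cursor.
import math

def ghost_position(cursor, target_center, expanded_radius):
    cx, cy = cursor
    tx, ty = target_center
    dx, dy = cx - tx, cy - ty
    d = math.hypot(dx, dy)
    if d <= expanded_radius:          # cursor already inside the expanded area
        return cursor
    scale = expanded_radius / d       # project onto the expanded boundary
    return (tx + dx * scale, ty + dy * scale)

print(ghost_position(cursor=(10.0, 0.0), target_center=(0.0, 0.0),
                     expanded_radius=3.0))   # -> (3.0, 0.0)
```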
Android-based speech processing for eldercare robotics BIBAFull-Text 87-88
  Tatiana Alexenko; Megan Biondo; Deya Banisakher; Marjorie Skubic
A growing elderly population has created a need for innovative eldercare technologies. The use of a home robot to assist with daily tasks is one such example. In this paper we describe an interface for human-robot interaction, which uses built-in speech recognition in Android phones to control a mobile robot. We discuss benefits of using a smartphone for speech-based robot control and present speech recognition accuracy results for younger and older adults obtained with an Android smartphone.
Finding the local angle in national news BIBAFull-Text 89-90
  Shawn O'Banion; Larry Birnbaum; Scott Bradley
Journalists often localize news stories that are not explicitly about the community they serve by investigating and describing how those stories affect that community. This is, in essence, a form of personalization based not on readers' personal interests, but rather on their ties to a geographic location. In this paper we present The Local Angle, an approach for automating the process of finding national and international news stories that are locally relevant. The Local Angle associates the people, companies, and organizations mentioned in news stories with geographic locations using semantic analysis tools and online knowledge bases. We describe the design and implementation of our prototype system, which helps content curators and consumers discover articles that are of local interest even if they do not originate locally.
Computerized adaptive testing and learning using Bayesian network BIBAFull-Text 91-92
  Kyung Soo Kim; Yong Suk Choi
In this paper, we propose a novel CAT (Computerized Adaptive Testing) system based on a Bayesian network. Our system makes good use of the topology and probabilistic inference algorithms of Bayesian networks to efficiently estimate a learner's proficiency and to give adaptive learning guidance when needed. In several experiments, we found that our system considerably improves proficiency-estimation performance compared with conventional CAT methods.
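To illustrate Bayesian proficiency estimation of the kind described above, the following sketch updates a discrete proficiency belief by Bayes' rule after each response; the three-level model and the likelihood table are illustrative, not the paper's network.

```python
# Minimal sketch of Bayesian proficiency estimation: a discrete latent
# proficiency with a likelihood of answering each item correctly, updated
# by Bayes' rule after every response.
LEVELS = ["low", "medium", "high"]
prior = {"low": 1 / 3, "medium": 1 / 3, "high": 1 / 3}

# P(correct | proficiency) for one test item (illustrative values).
p_correct = {"low": 0.2, "medium": 0.5, "high": 0.85}

def update(belief, answered_correctly):
    """Posterior over proficiency after observing one item response."""
    like = {L: p_correct[L] if answered_correctly else 1 - p_correct[L]
            for L in LEVELS}
    unnorm = {L: belief[L] * like[L] for L in LEVELS}
    z = sum(unnorm.values())
    return {L: v / z for L, v in unnorm.items()}

belief = update(prior, answered_correctly=True)
belief = update(belief, answered_correctly=True)
print(max(belief, key=belief.get), belief)
```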
A visual monitoring and management tool for smart environments BIBAFull-Text 93-94
  Gerrit Kahl
Smart spaces are equipped with a large number of sensors and actuators. Data measured by the sensors is sent to respective services, which can react to it and control actuators correspondingly. In order to improve the services and make them smarter, they can communicate with each other and exchange data. The more interferences exist between the services, the more complex the monitoring and extension of such smart spaces becomes. In this paper, we propose a monitoring tool for these environments, which consists of a real and a virtual component. The real component is the physical smart space itself, and the virtual one consists of a three-dimensional model of the space. Sensor information is transmitted from the real to the virtual component and represented there via an appropriate visualization. Additionally, the tool offers a communication channel from the virtual component to its real counterpart to control the physical actuators.
Semantic search for smart mobile devices BIBAFull-Text 95-96
  Sangjin Shin; Jihoon Ko; Dong-Hoon Shin; Jooik Jung; Kyong-Ho Lee
To address the imprecision of keyword-based search, we propose an efficient semantic search method based on a lightweight mobile ontology designed for smart mobile devices. In addition, we implemented a prototype semantic search engine running on Android smartphones; our prototype provides a better user experience compared with keyword-based search.
Multimodal interaction strategies in a multi-device environment around natural speech BIBAFull-Text 97-98
  Christian Schulz; Daniel Sonntag; Markus Weber; Takumi Toyama
In this paper we present an intelligent user interface that combines a speech-based interface with several other input modalities. The integration of multiple devices into a working environment should provide greater flexibility in the daily routine of, for example, medical experts. To this end, we introduce a medical cyber-physical system that demonstrates the use of a bidirectional connection between a speech-based interface and a head-mounted see-through display. We show examples of how multiple input modalities can be exploited to increase the usability of a speech-based interaction system.
Ranking in information streams BIBAFull-Text 99-100
  Steven Bourke; Michael O'Mahony; Rachael Rafter; Barry Smyth
Information streams allow social network users to receive and interact with the latest messages from friends and followers. But as our social graphs grow and mature, it becomes increasingly difficult to deal with the information overload that these realtime streams introduce. Some social networks, like Facebook, use proprietary interestingness metrics to rank messages in an effort to improve stream relevance and drive engagement. In this paper, we evaluate learning-to-rank approaches that rank content based on a variety of features taken from live-user data.
HAPPIcom: haptic pad for impressive text communication BIBAFull-Text 101-102
  Ayano Tamura; Shogo Okada; Katsumi Nitta; Tetsuya Harada; Makoto Sato
We propose a system called Haptic Pad for Impressive Text Communication for creating text messages with haptic stimuli using the SPIDAR-tablet haptic interface. The system helps users indicate emotion in text messages, and characters' actions in storytelling, by attaching physical feedback to words in the text. We evaluated the effectiveness of the system experimentally in two scenarios: storytelling and text messaging. We found that the effective use of haptic stimuli depends on the situation and the participant.
PhotoAct: act on photo taking BIBAFull-Text 103-104
  Shuguang Wu; Jun Xiao; Ken Reily
In many commercial environments, understanding the user's intention can lead to more engaging and intelligent user interactions. We looked at theme park photo kiosks, where many people use their camera phones to capture their ride photos from preview displays. We believe that identifying people with photo-taking intentions and engaging them through an intelligent UI can help reduce the instances of people opting for low-quality but free screen captures. We built a prototype system called PhotoAct, which uses a depth camera to recognize human postures and infer people's photo-taking intentions in real time. In this paper, we describe the system components and the detection algorithm, and present preliminary lab study results.
An affective evaluation tool using brain signals BIBAFull-Text 105-106
  Manolis Perakakis; Alexandros Potamianos
We propose a new interface evaluation tool that incorporates affective metrics derived from the electroencephalography (EEG) signals of the Emotiv EPOC neuro-headset. The tool captures and analyzes information in real time from a multitude of sources, such as raw EEG, affective metrics (frustration, engagement and excitement) and facial expression. The proposed tool has been used to gain detailed affective information about users interacting with a mobile multimodal (touch and speech) iPhone application, for which we investigated the effect of speech recognition errors and modality usage patterns.
A privacy-aware shopping scenario BIBAFull-Text 107-108
  Gerrit Kahl; Denise Paradowski
Providing private data is a highly controversial and widely debated topic. Not only information about individuals but also information about companies should be kept private. In order to satisfy the needs of both individuals and companies, corresponding privacy protection mechanisms have to be implemented. For example, systems that assist customers during the shopping process in a physical retail store require customer-related information, such as the shopping list, allergy or bank account information, as well as data from the retailer, such as the product range and prices. In this paper, we introduce a concept for decoupling both information sources, implemented in a shopping scenario, which among other things allows Mobile Payment without the transmission of private data. The implemented prototype was presented at a large fair to potential users in order to receive valuable feedback.
An initial analysis of semantic wikis BIBAFull-Text 109-110
  Yolanda Gil; Angela Knight; Kevin Zhang; Larry Zhang; Ricky Sethi
Semantic wikis augment wikis with semantic properties that can be used to aggregate and query data through reasoning. Semantic wikis are used by many communities, for widely varying purposes such as organizing genomic knowledge, coding software, and tracking environmental data. Although wikis have been analyzed extensively, there has been no published analysis of the use of semantic wikis. We carried out an initial analysis of twenty semantic wikis selected for their diverse characteristics and content. Based on the number of property edits per contributor, we identified several patterns to characterize community behaviors that are common to groups of wikis.
An affordable real-time assessment system for surgical skill training BIBAFull-Text 111-112
  Gazi Islam; Baoxin Li; Kanav Kahol
This research proposes a novel computer-vision-based approach to skill assessment that observes a surgeon's hand and surgical tool movements in minimally invasive surgical training and can be extended to evaluation in real surgeries. Videos capturing the surgical field are analyzed by a system composed of a series of computer vision algorithms. The system automatically detects major skill-measuring features in surgical task videos and provides real-time feedback based on objective and quantitative measurements of surgical skill.

Workshops

SmartObjects: second workshop on interacting with smart objects BIBAFull-Text 113-114
  Dirk Schnelle-Walka; Jochen Huber; Roman Lissermann; Oliver Brdiczka; Kris Luyten; Max Mühlhäuser
Smart objects are everyday objects that have computing capabilities and give rise to new ways of interacting with our environment. The increasing number of smart objects in our lives shapes how we interact beyond the desktop. In this workshop, we explore various aspects of the design, development and deployment of smart objects, including how one can interact with them.
IUI 2013 3rd workshop on location awareness for mixed and dual reality: (LAMDa'13) BIBAFull-Text 115-118
  Tim Schwartz; Gerrit Kahl; Sally A. Applin; Eyal Dim
This workshop explores the interactions between location awareness and Dual/Mixed/PolySocial Reality in smart (instrumented) environments and their impact on culture and society. The main scope of this workshop is to explore how a Dual/Mixed/PolySocial Reality paradigm can be used to improve applications in smart environments and, by extension, which new possibilities can be opened up by these paradigms.
   These may include positioning methods and location-based services using the DR paradigm, such as navigation services and group interaction services (location-based social signal processing), as well as agent-based intermediaries to offset errant, voluminous multiplexed communication messaging. The workshop is also open to discussing sensor and actuator technologies that are being developed to foster the growth of interaction possibilities in smart environments.
3rd International workshop on intelligent user interfaces for developing regions: IUI4DR BIBAFull-Text 119-120
  Sheetal Agarwal; Nitendra Rajput; Neesha Kodagoda; B. L. William Wong; Sharon Oviatt
Information Technology (IT) has had a significant impact on society and has touched all aspects of our lives. Up until now, computers and expensive devices have fueled this growth, resulting in several benefits to society. The challenge now is to take this success to the next level, where IT services can be accessed by users in developing regions.
   The first IUI4DR workshop was held at IUI 2008. This workshop focused on low cost interfaces, interfaces for illiterate people and on exploring different input mechanisms. The second workshop held at IUI 2011 focused on multimodal applications and collaborative interfaces in particular to aid effective navigation of content and access to services.
   So far we have concentrated on mobile devices as the primary means for people to access content and services. In particular, we focused on the low-end feature phones that are widely used. However, the smart phone market is booming even in developing countries, with touch phones available for as little as 50 USD. We want to explore how devices such as smart TVs, smart phones, old desktop machines, radios, etc. can be used to provide novel interaction methods and interfaces for low-literate populations. We would also like to continue our focus on interaction modalities other than speech, such as gestures, haptic inputs and touch interfaces.
IUI workshop on interactive machine learning BIBAFull-Text 121-124
  Saleema Amershi; Maya Cakmak; W. Bradley Knox; Todd Kulesza; Tessa Lau
Many applications of Machine Learning (ML) involve interactions with humans. Humans may provide input to a learning algorithm (in the form of labels, demonstrations, corrections, rankings or evaluations) while observing its outputs (in the form of feedback, predictions or executions). Although humans are an integral part of the learning process, traditional ML systems used in these applications are agnostic to the fact that inputs/outputs are from/for humans.
   However, a growing community of researchers at the intersection of ML and human-computer interaction are making interaction with humans a central part of developing ML systems. These efforts include applying interaction design principles to ML systems, using human-subject testing to evaluate ML systems and inspire new methods, and changing the input and output channels of ML systems to better leverage human capabilities. With this Interactive Machine Learning (IML) workshop at IUI 2013 we aim to bring this community together to share ideas, get up-to-date on recent advances, progress towards a common framework and terminology for the field, and discuss the open questions and challenges of IML.