Companion Proceedings of the 2015 International Conference on Intelligent User Interfaces

Fullname: Companion Proceedings of the 20th International Conference on Intelligent User Interfaces
Editors: Oliver Brdiczka; Polo Chau; Giuseppe Carenini; Shimei Pan; Per Ola Kristensson
Location: Atlanta, Georgia
Dates: 2015-Mar-29 to 2015-Apr-01
Volume: 2
Publisher: ACM
Standard No: ISBN 978-1-4503-3308-5; ACM DL: Table of Contents; hcibib: IUI15-2
Papers: 36
Pages: 148
Links: Conference Website
  1. IUI 2015-03-29 Volume 2
    1. Poster & Demo Session
    2. Student Consortium

IUI 2015-03-29 Volume 2

Poster & Demo Session

Automatic Generation and Insertion of Assessment Items in Online Video Courses BIBAFull-Text 1-4
  Amrith Krishna; Plaban Bhowmick; Krishnendu Ghosh; Archana Sahu; Subhayan Roy
In this paper, we propose a prototype system for automatic generation and insertion of assessment items in online video courses. The proposed system analyzes the text transcript of a requested video lecture to suggest self-assessment items at runtime through automatic discourse segmentation and question generation. To deal with the problem of question generation from noisy transcription, the system relies on semantically similar Wikipedia text segments. We base our study on a popular video lecture portal -- the National Programme on Technology Enhanced Learning (NPTEL) -- but the system can be adapted to other portals as well.
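The Wikipedia-backed step can be pictured as a retrieval problem: for each noisy transcript segment, find a semantically similar but cleaner Wikipedia passage to generate questions from. The snippet below is only a minimal illustrative sketch with placeholder text, not the authors' NPTEL pipeline.

    # Illustrative sketch: pick the Wikipedia passage most similar to a noisy
    # transcript segment, so questions can be generated from cleaner text.
    # Placeholder data; not the pipeline described in the paper.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    transcript_segment = "noisy ASR text about binary search trees ..."
    wiki_passages = [
        "A binary search tree is a rooted binary tree data structure ...",
        "Quicksort is a divide-and-conquer sorting algorithm ...",
    ]

    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform([transcript_segment] + wiki_passages)
    scores = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()
    best_passage = wiki_passages[scores.argmax()]  # cleaner source for question generation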
The News Context Project BIBAFull-Text 5-8
  Larry Birnbaum; Miriam Boon; Scott Bradley; Jennifer Wilson
We describe intelligent information technologies designed to automatically provide both journalists and ordinary newsreaders with a broad range of the contextual information they need in order to better understand news stories, presented in an immediate and compelling fashion. These systems automatically identify, select, and present appropriate contextual information based on the story a user is currently viewing. Our experiences in building a number of specific systems of this kind have led to the creation of a general architecture and platform for developing such applications. These systems interact with news consumers directly through mechanisms such as browser extensions.
MuLES: An Open Source EEG Acquisition and Streaming Server for Quick and Simple Prototyping and Recording BIBAFull-Text 9-12
  Raymundo Cassani; Hubert Banville; Tiago H. Falk
The past few years have seen the availability of consumer electroencephalography (EEG) devices increase significantly. These devices, usually with a compact form factor and affordable price, now allow researchers and enthusiasts to use EEG in various new contexts and environments. However, many consumer headsets require extensive programming experience to interface with their respective drivers; moreover, access to and recording of EEG data has yet to be standardized across devices. Consequently, a tool is needed that facilitates the recording and streaming of EEG data from consumer headsets, can easily be interfaced with different programming languages and software, and allows interchangeability between devices. This paper describes the open source MuSAE Lab EEG Server (MuLES), an EEG acquisition and streaming server that aims at creating a standard interface for portable EEG headsets, in order to accelerate the development of brain-computer interfaces (BCIs) and of general EEG use in novel contexts. In addition to the EEG server interface, which currently supports five different consumer devices and session playback, clients are developed for use on different platforms and in various programming languages, making prototyping and recording a quick and simple task. To validate the functionality and usability of the EEG server, a use case highlighting its main features is presented. The developed tool simplifies the acquisition and recording of EEG data from portable consumer devices by providing a single efficient interface, with applications in areas such as basic and behavioural research, prototyping, neurogaming, live performance and arts.
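The value of such a server is that any client, in any language, can read a stream of samples from it over a socket. The sketch below is a generic, hypothetical streaming client written against an assumed wire format (the host, port, and "N little-endian float32 values per frame" layout are placeholders); it does not reproduce the actual MuLES protocol, only the pattern such a server enables.

    # Hypothetical generic EEG streaming client; the host/port and frame layout
    # are assumptions for illustration, not the actual MuLES protocol.
    import socket
    import struct

    HOST, PORT, N_CHANNELS = "127.0.0.1", 30000, 4
    FRAME_BYTES = 4 * N_CHANNELS  # one float32 sample per channel

    with socket.create_connection((HOST, PORT)) as sock:
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            buf += chunk
            while len(buf) >= FRAME_BYTES:
                frame, buf = buf[:FRAME_BYTES], buf[FRAME_BYTES:]
                sample = struct.unpack("<%df" % N_CHANNELS, frame)
                print(sample)  # hand off to filtering / feature extraction here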
Towards Integrating Real-Time Crowd Advice with Reinforcement Learning BIBAFull-Text 17-20
  Gabriel V. de la Cruz; Bei Peng; Walter S. Lasecki; Matthew E. Taylor
Reinforcement learning is a powerful machine learning paradigm that allows agents to autonomously learn to maximize a scalar reward. However, it often suffers from poor initial performance and long learning times. This paper discusses how collecting on-line human feedback, both in real time and post hoc, can potentially improve the performance of such learning systems. We use the game Pac-Man to simulate a navigation setting and show that workers are able to accurately identify both when a sub-optimal action is executed, and what action should have been performed instead. Demonstrating that the crowd is capable of generating this input, and discussing the types of errors that occur, serves as a critical first step in designing systems that use this real-time feedback to improve systems' learning performance on the fly.
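One simple way to picture the use of such crowd input is as an action-suggestion prior on top of ordinary Q-learning, where the action the crowd says should have been performed receives a bonus during selection. The sketch below is a hypothetical illustration, not the method evaluated in the paper; the bonus scheme and hyperparameters are assumptions.

    # Illustrative Q-learning with crowd advice: the crowd-suggested action
    # gets a small bonus during action selection. Assumed scheme, for illustration only.
    import random
    from collections import defaultdict

    ALPHA, GAMMA, EPSILON, ADVICE_BONUS = 0.1, 0.95, 0.1, 1.0
    Q = defaultdict(float)  # (state, action) -> value

    def choose_action(state, actions, crowd_suggestion=None):
        if random.random() < EPSILON:
            return random.choice(actions)
        def score(a):
            return Q[(state, a)] + (ADVICE_BONUS if a == crowd_suggestion else 0.0)
        return max(actions, key=score)

    def update(state, action, reward, next_state, next_actions):
        best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])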
User-Interfaces for Incremental Recipient and Response Time Predictions in Asynchronous Messaging BIBAFull-Text 21-24
  Connor Hamlet; Daniel Korn; Nikhil Prasad; Volodymyr Siedlecki; Eliezer Encarnacion; Jacob Bartel; Prasun Dewan
We have created a set of predictive user interfaces, both existing and novel, for exchanging messages in asynchronous collaborative systems such as email and internet communities. These interfaces support predictions of tags, hierarchical recipients, and message response times. The predictions are made incrementally, as messages are composed, and are offered to both senders and receivers of messages. The user interfaces are implemented by a test-bed that also supports experiments to evaluate them. It can automate the actions of the collaborators with whom a subject exchanges messages, replay user actions, and gather and display effort and correctness metrics related to these predictions. The collaborator actions and predictions are specified using a declarative mechanism. A video demonstration of this work is available at http://youtu.be/NJt9Rfqb1ko.
Interactive Control and Visualization of Difficulty Inferences from User-Interface Commands BIBAFull-Text 25-28
  Duri Long; Nicholas Dillon; Kun Wang; Jason Carter; Prasun Dewan
Recently, there has been research on inferring user emotions. Like other inference research, it requires an iterative process in which what-if scenarios are played with different features and algorithms. Traditional, general-purpose data mining tools such as Weka have played an important part in promoting this process. We have augmented this toolset with an additional interactive test-bed designed for prediction and communication of programmer difficulties from user-interface commands. It provides end-user interfaces for communicating, correcting, and reacting to the predictions. In addition, it offers researchers user-interfaces for interacting with the prediction process as it is executed rather than, as in traditional mining tools, after it has generated data for a set of experimental subjects. These user-interfaces can be used to determine key elements of the prediction process, to understand why certain right or wrong predictions were made, and to change parameters of the process. A video demonstration of this work is available at http://youtu.be/09LpDIPG5h8.
OfficeHours: A System for Student Supervisor Matching through Reinforcement Learning BIBAFull-Text 29-32
  Yuan Gao; Kalle Ilves; Dorota Glowacka
We describe OfficeHours, a recommender system that assists students in finding potential supervisors for their dissertation projects. OfficeHours is an interactive recommender system that combines reinforcement learning techniques with a novel interface that assists the student in formulating their query and allows active engagement in directing their search. Students can directly manipulate document features (keywords) extracted from scientific articles written by faculty members to indicate their interests, and reinforcement learning is used to model the student's interests by allowing the system to trade off between exploration and exploitation. The goal of the system is to give the student the opportunity to more effectively search for possible project supervisors in a situation where the student may have difficulties formulating their query or when very little information may be available on faculty members' websites about their research interests.
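The exploration/exploitation trade-off over user-manipulated keywords can be illustrated with a simple epsilon-greedy scheme over keyword weights. This is a hypothetical sketch, not OfficeHours' actual model; the profile structure, learning rate, and exploration rate are assumptions.

    # Hypothetical epsilon-greedy sketch of exploration vs. exploitation over
    # keyword weights; not OfficeHours' actual algorithm.
    import random

    def rank_supervisors(supervisors, keyword_weights, epsilon=0.2):
        """supervisors: dict name -> {keyword: relevance}; keyword_weights: learned interests."""
        def score(name):
            return sum(keyword_weights.get(k, 0.0) * v for k, v in supervisors[name].items())
        ranked = sorted(supervisors, key=score, reverse=True)
        if random.random() < epsilon:   # exploration: occasionally surface other candidates
            random.shuffle(ranked)
        return ranked

    def update_weights(keyword_weights, clicked_profile, lr=0.1):
        # exploitation: move interest weights toward keywords of supervisors the student engaged with
        for k, v in clicked_profile.items():
            keyword_weights[k] = keyword_weights.get(k, 0.0) + lr * v
        return keyword_weights

    profiles = {"Prof. A": {"hci": 0.9, "ml": 0.2}, "Prof. B": {"ml": 0.8}}
    weights = update_weights({}, profiles["Prof. B"])
    print(rank_supervisors(profiles, weights))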
From "Overview" to "Detail": An Exploration of Contextual Transparency for Public Transparent Interfaces BIBAFull-Text 33-36
  Heesun Kim; Bo Kyung Huh; Seung Hyen Im; Hae Youn Joung; Gyu Hyun Kwon; Ji-Hyung Park
This study explores the contextual transparency of information presented on public transparent user interfaces while maintaining adequate legibility. To address this issue, we investigate the relationship between information and transparency in a shop context. In this paper, we present an experiment that examines the effects of transparency, user proximity, and information type on legibility with a public transparent information system. We report significant effects on performance and legibility, and the results indicate that the appropriate contextual transparency varies with the user's proximity, depending on whether the user focuses on the information or the environment. In addition, transparency at or below 50% (the 25% and 50% levels) fits closer proximity, while 50% transparency offers a more harmonious view in a distant context. The implications of these results for the usability of public transparent user interfaces, along with design recommendations, are also discussed.
WallSHOP: Multiuser Interaction with Public Digital Signage using Mobile Devices for Personalized Shopping BIBAFull-Text 37-40
  Soh Masuko; Masafumi Muta; Keiji Shinzato; Adiyan Mujibiya
We propose WallSHOP, a novel interactive shopping experience that extends content sharing between publicly available digital signage and mobile devices. Multiple users can freely access and browse the content of public digital signage through a public network. Furthermore, users can interact with this content using a personalized cursor that can be controlled with a touch-screen mobile device. WallSHOP also supports pulling content from the digital signage to a user's device, allowing users to browse through available products and privately perform checkouts by utilizing the advantages of both public and private displays. WallSHOP focuses on feasibility and scalability; therefore, it is implemented using only web-based components and does not require the installation of additional software.
Surveying Older Adults About a Recommender System for a Digital Library BIBAFull-Text 41-44
  Adam N. Maus; Amy K. Atwood
We present results from a survey of adults, 63 and older, about the potential implementation of a recommender system within a digital library of health-related content. We studied how these older adults perceive the idea of a recommender system and different aspects of its design. We presented four different types of recommender systems in the survey, and our results indicate that this group would prefer a system based on explicit feedback in the form of ratings that measure the helpfulness of content. Reinforcing previous research, we learned that this group is interested in a system that explains why it recommended content and that they do not want to spend much time creating a profile of interests to warm up the system. We discuss where we would use this recommender system, how we designed the survey for our audience, and plans for future studies on this subject.
A Task-Centered Interface for On-Line Collaboration in Science BIBAFull-Text 45-48
  Felix Michel; Yolanda Gil; Varun Ratnakar; Matheus Hauder
Although collaborative activities are paramount in science, little attention has been devoted to supporting on-line scientific collaborations. Our work focuses on scientific collaborations that revolve around complex science questions, which require significant coordination to synthesize multi-disciplinary findings, sustained engagement from contributors over extended periods of time, and continuous growth to accommodate new contributors as the work evolves. This paper presents the interface of the Organic Data Science Wiki to address these challenges. Our solution is based on Semantic MediaWiki and extends it with new features for scientific collaboration. We present preliminary results from the usage of the interface in a pilot research project.
VizRec: A Two-Stage Recommender System for Personalized Visualizations BIBAFull-Text 49-52
  Belgin Mutlu; Eduardo Veas; Christoph Trattner; Vedran Sabol
Identifying and using the information from distributed and heterogeneous information sources is a challenging task in many application fields. Even with services that offer well-defined structured content, such as digital libraries, it becomes increasingly difficult for a user to find the desired information. To cope with an overloaded information space, we propose a novel approach -- VizRec -- combining recommender systems (RS) and visualizations. VizRec suggests personalized visual representations for recommended data. One important aspect of our contribution, and a prerequisite for VizRec, is a model of user preferences that drives personalization. We present a crowd-based evaluation and show how such a model of preferences can be elicited.
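The two-stage idea, first recommend data items, then recommend a visual encoding for them from the elicited preference model, can be sketched as below. This is a hypothetical illustration; the scoring inputs and chart vocabulary are placeholder assumptions, not VizRec's implementation.

    # Hypothetical two-stage sketch: stage 1 ranks data items, stage 2 picks a
    # visualization type from elicited user preferences. Placeholder scores.
    def recommend(items, item_scores, viz_prefs, data_shape):
        # Stage 1: item relevance scores (assumed precomputed by a recommender).
        top_items = sorted(items, key=lambda i: item_scores.get(i, 0.0), reverse=True)[:10]
        # Stage 2: candidate encodings compatible with the data, weighted by user preference.
        candidates = {"bar", "line", "scatter"} if data_shape == "numeric" else {"bar", "map"}
        best_viz = max(candidates, key=lambda v: viz_prefs.get(v, 0.0))
        return top_items, best_viz

    items = ["dataset_a", "dataset_b", "dataset_c"]
    print(recommend(items, {"dataset_b": 0.9}, {"scatter": 0.7, "bar": 0.4}, "numeric"))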
Mechanix: A Sketch-Based Educational Interface BIBAFull-Text 53-56
  Trevor Nelligan; Seth Polsley; Jaideep Ray; Michael Helms; Julie Linsey; Tracy Hammond
At the university level, high enrollment numbers in classes can be overwhelming for professors and teaching assistants to manage. Grading assignments and tests for hundreds of students is time-consuming and has led to a push for software-based learning in large university classes. Unfortunately, traditional quantitative question-and-answer mechanisms are often not sufficient for STEM courses, where there is a focus on problem-solving techniques over finding the "right" answers. Working through problems by hand can be important in memory retention, so in order for software learning systems to be effective in STEM courses, they should be able to intelligently understand students' sketches. Mechanix is a sketch-based system that allows students to step through problems designed by their instructors with personalized feedback and optimized interface controls. Optimizations like color-coding, menu bar simplification, and tool consolidation are recent improvements in Mechanix that further the aim to engage and motivate students in learning.
An Interactive Pedestrian Environment Simulator for Cognitive Monitoring and Evaluation BIBAFull-Text 57-60
  Jason Orlosky; Markus Weber; Yecheng Gu; Daniel Sonntag; Sergey Sosnovsky
Recent advances in virtual and augmented reality have led to the development of a number of simulations for different applications. In particular, simulations for monitoring, evaluation, training, and education have started to emerge for the consumer market due to the availability and affordability of immersive display technology. In this work, we introduce a virtual reality environment that provides an immersive traffic simulation designed to observe behavior and monitor relevant skills and abilities of pedestrians who may be at risk, such as elderly persons with cognitive impairments. The system provides basic reactive functionality, such as display of navigation instructions and notifications of dangerous obstacles during navigation tasks. Methods for interaction using hand and arm gestures are also implemented to allow users to explore the environment in a more natural manner.
Interactive Querying over Large Network Data: Scalability, Visualization, and Interaction Design BIBAFull-Text 61-64
  Robert Pienta; Acar Tamersoy; Hanghang Tong; Alex Endert; Duen Horng (Polo) Chau
Given the explosive growth of modern graph data, new methods are needed that allow for the querying of complex graph structures without the need for complicated query languages; in short, interactive graph querying is desirable. We describe our work towards our overall research goal of designing and developing an interactive querying system for large network data. We focus on three critical aspects: scalable data mining algorithms, graph visualization, and interaction design. In previous work we completed MAGE, an approximate subgraph matching system that provides the algorithmic foundation, allowing us to query graphs with hundreds of millions of edges. Our earlier work on visual graph querying, Graphite, was the first step towards making an interactive graph querying system. We are now designing the graph visualization and robust interaction needed to make truly interactive graph querying a reality.
Robot Companions and Smartpens for Improved Social Communication of Dementia Patients BIBAFull-Text 65-68
  Alexander Prange; Indra Praveen Sandrala; Markus Weber; Daniel Sonntag
In this demo paper we describe how a digital pen and a humanoid robot companion can improve the social communication of a dementia patient. We propose the use of NAO, a humanoid robot, as a companion to the dementia patient in order to continuously monitor his or her activities and provide cognitive assistance in daily life situations. For example, patients can communicate with NAO through natural language via the speech dialogue functionality we integrated. Most importantly, to improve communication, i.e., the sending of digital messages (texts, emails), we propose the use of a smartpen with which patients write messages on normal paper carrying an invisible dot pattern, initiating handwriting and sketch recognition in real time. The smartpen application is embedded into the human-robot speech dialogue.
Data Privacy and Security Considerations for Personal Assistants for Learning (PAL) BIBAFull-Text 69-72
  Elaine M. Raybourn; Nathan Fabian; Warren Davis; Raymond C. Parks; Jonathan McClain; Derek Trumbo; Damon Regan; Paula Durlach
A hypothetical scenario is utilized to explore privacy and security considerations for intelligent systems, such as a Personal Assistant for Learning (PAL). Two categories of potential concerns are addressed: factors facilitated by user models, and factors facilitated by systems. Among the strategies presented for risk mitigation is a call for ongoing, iterative dialog among privacy, security, and personalization researchers during all stages of development, testing, and deployment.
Framework for Realizing a Free-Target Eye-tracking System BIBAFull-Text 73-76
  Daiki Sakai; Michiya Yamamoto; Takashi Nagamatsu
Various eye-trackers have recently become commercially available, but research on higher-spec eye-tracking systems continues. In particular, studies have shown that conventional eye-trackers are rather inflexible in layout: the cameras, light sources, and user position are fixed, and only a predefined plane can be the target of eye tracking. In this study, we propose a new framework that we call a Free-Target Eye-tracking System, which consists of eye-tracking hardware and a hardware layout solver. We developed a prototype of the hardware layout solver and demonstrated its effectiveness.
Intelligent Search for Biologically Inspired Design BIBAFull-Text 77-80
  Evangelia Spiliopoulou; Spencer Rugaber; Ashok Goel; Lianghao Chen; Bryan Wiltgen; Arvind Krishnaa Jagannathan
In Biologically Inspired Design (BID), engineers use biology as a source of ideas for solving engineering problems. However, locating relevant literature is difficult due to vocabulary differences and lack of domain knowledge. IBID is an intelligent search mechanism that uses a functional taxonomy to direct search and a formal modeling notation for annotating relevant search targets.
USHER: An Intelligent Tour Companion BIBAFull-Text 81-84
  Shubham Toshniwal; Parikshit Sharma; Saurabh Srivastava; Richa Sehgal
Audio guides have been the prevalent mode of information delivery in public spaces such as museums and art galleries. These devices are programmed to render static information about the collections and artworks present and require human input to operate. The inability to automatically deliver contextual messages and the lack of interactivity are major hurdles to ensuring a rich and seamless user experience. Ubiquitous smartphones can be leveraged to create pervasive audio guides that provide a rich and personalized user experience. In this paper, we present the design and implementation of "Usher", an intelligent tour companion. Usher provides three distinct advantages over traditional audio guides. First, Usher uses smartphone sensors to infer user context, such as physical location, locomotive state, and orientation, to deliver relevant information to the user. Second, Usher provides an interface to a cognitive question-answering (QA) service for inquisitive users and answers contextual queries. Finally, Usher notifies users if any of their social media friends are present in the vicinity. The ability to seamlessly track user context to provide rich semantic information, together with the cognitive capability to answer contextual queries, means that Usher can substantially enhance the user experience in a museum.
Human-Machine Cooperative Viewing System for Wide-angle Multi-view Videos BIBAFull-Text 85-88
  Fumiharu Tomiyasu; Kenji Mase
Wide-angle multi-view video, which provides viewers with a realistic experience, has received increasing attention in recent years. Users want to watch such videos interactively, switching viewpoints freely, but without the burden of continual viewpoint selection or complex operation. Viewing systems should therefore satisfy these conflicting needs simultaneously. In this paper, we take the novel approach of treating the viewing of multi-view videos as a cooperative task. We introduce a human-machine cooperative viewing system for wide-angle multi-view videos that exploits target-centered viewing. Our system consists of a manual viewpoint selection function and an automatic viewpoint selection function based on our concept.
Hairware: Conductive Hair Extensions as a Capacitive Touch Input Device BIBAFull-Text 89-92
  Katia Vega; Marcio Cunha; Hugo Fuks
Our aim is to use our own bodies as an interactive platform. We are trying to move away from traditional wearable devices worn on clothes and accessories, where gestures are noticeable and reminiscent of a cyborg look. We follow the Beauty Technology paradigm, which uses the body's surface as an interactive platform by integrating technology into beauty products applied directly to one's skin, fingernails, and hair. Thus, we propose Hairware, a Beauty Technology prototype that connects chemically metalized hair extensions to a microcontroller, turning them into an input device for triggering different objects. Hairware acts as a capacitive touch sensor that detects touch variations on the hair and uses machine learning algorithms to recognize the user's intention. In this way, we add a new functionality to hair extensions, making them a seamless device that recognizes auto-contact behaviors that no observer would identify. This work presents the design of Hairware's hardware and software implementation. In this demo, we show Hairware acting as a controller for smartphones and computers.
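The sensing-plus-learning pipeline described above (capacitance readings from the conductive hair, then a classifier that maps them to an intended trigger) can be illustrated roughly as below. The features, labels, and synthetic data are assumptions for illustration, not Hairware's implementation.

    # Illustrative gesture recognition on windows of capacitance readings.
    # Features, labels, and synthetic data are placeholder assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def window_features(window):
        """Simple statistics over one window of raw capacitance samples."""
        w = np.asarray(window, dtype=float)
        return [w.mean(), w.std(), w.max() - w.min(), np.abs(np.diff(w)).mean()]

    rng = np.random.default_rng(0)
    touch = [rng.normal(5.0, 1.0, 64) for _ in range(50)]      # synthetic "touch" windows
    no_touch = [rng.normal(0.0, 0.2, 64) for _ in range(50)]   # synthetic baseline windows
    X = [window_features(w) for w in touch + no_touch]
    y = ["touch"] * 50 + ["no_touch"] * 50

    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    print(clf.predict([window_features(rng.normal(5.0, 1.0, 64))]))  # recognized gesture triggers an action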
MindMiner: Quantifying Entity Similarity via Interactive Distance Metric Learning BIBAFull-Text 93-96
  Xiangmin Fan; Youming Liu; Nan Cao; Jason Hong; Jingtao Wang
We present MindMiner, a mixed-initiative interface for capturing subjective similarity measurements via a combination of new interaction techniques and machine learning algorithms. MindMiner collects qualitative, hard-to-express similarity measurements from users via active polling with uncertainty and example-based visual constraint creation. MindMiner also formulates human prior knowledge into a set of inequalities and learns a quantitative similarity distance metric via convex optimization. In a 12-participant peer-review understanding task, we found that MindMiner was easy to learn and use, and could capture users' implicit knowledge about writing performance and cluster target entities into groups that matched subjects' mental models.
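The "inequalities plus convex optimization" step is, in general form, Mahalanobis metric learning from relative similarity judgments. A standard formulation of that kind reads as follows (illustrative; not necessarily MindMiner's exact objective or constraint set):

    % Illustrative metric-learning program; not necessarily MindMiner's exact objective.
    \min_{M \succeq 0,\; \xi \geq 0} \;\; \sum_{(i,j,k)} \xi_{ijk} + \lambda \lVert M \rVert_F^2
    \quad \text{s.t.} \quad d_M(x_i, x_k) - d_M(x_i, x_j) \geq 1 - \xi_{ijk},
    \qquad d_M(x, y) = (x - y)^{\top} M (x - y),

where each triple (i, j, k) encodes a user judgment "entity i is more similar to j than to k" and the slack variables \xi absorb inconsistent feedback.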
A Model for Data-Driven Sonification Using Soundscapes BIBAFull-Text 97-100
  KatieAnna E. Wolf; Genna Gliner; Rebecca Fiebrink
A sonification is a rendering of audio in response to data, and is used in instances where visual representations of data are impossible, difficult, or unwanted. Designing sonifications often requires knowledge in multiple areas as well as an understanding of how the end users will use the system. This makes it an ideal candidate for end-user development where the user plays a role in the creation of the design. We present a model for sonification that utilizes user-specified examples and data to generate cross-domain mappings from data to sound. As a novel contribution we utilize soundscapes (acoustic scenes) for these user-selected examples to define a structure for the sonification. We demonstrate a proof of concept of our model using sound examples and discuss how we plan to build on this work in the future.
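The cross-domain mapping at the heart of such a model can be pictured as normalizing each data stream and letting it drive a parameter of one layer of the user-chosen soundscape. The sketch below is a hypothetical illustration; the layer names, data, and the choice of gain as the controlled parameter are assumptions, not the authors' model.

    # Hypothetical data-to-soundscape mapping: each data stream drives the gain
    # of one layer of a user-chosen soundscape. Placeholder layers and data.
    def normalize(values):
        lo, hi = min(values), max(values)
        return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

    def map_to_layers(data_streams, layers):
        """data_streams: dict name -> list of numbers; layers: soundscape layer names."""
        mapping = {}
        for (name, values), layer in zip(data_streams.items(), layers):
            mapping[layer] = normalize(values)  # per-timestep gain (0..1) for this layer
        return mapping

    data = {"temperature": [12, 18, 25, 31], "wind": [2, 9, 4, 7]}
    print(map_to_layers(data, ["birdsong", "rainfall"]))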

Student Consortium

Towards a Crowd-based Picture Schematization System BIBAFull-Text 101-104
  Huaming Rao
Picture schematization has been shown to be an important step for a wide range of applications and remains an active and challenging research area. Many automatic methods have been proposed to solve this problem, but results still fall short of expectations. Since crowdsourcing has been successfully applied to fill gaps that AI cannot yet bridge, we are interested in exploring how crowdsourcing can be utilized in this area. This paper briefly summarizes our recent efforts towards building a feasible crowd-based system to schematize pictures in a reliable and cost-effective way.
Visual Text Analytics for Asynchronous Online Conversations BIBAFull-Text 105-108
  Enamul Hoque
In the last decade, there has been an exponential growth of online conversations thanks to the rise of social media. Analyzing and gaining insights from such conversations can be quite challenging for a user, especially when the discussions become very long. During my doctoral research, I aim to investigate how to integrate Information Visualization with Natural Language Processing techniques to better support the user's task of exploring and analyzing conversations. For this purpose, I consider the following approaches: apply design study methodology in InfoVis to uncover data and task abstractions; apply NLP methods for extracting the identified data to support those tasks; and incorporate human feedback in the text analysis process when the extracted data is noisy and/or does not match the user's mental model and current tasks. Through a set of design studies, I aim to evaluate the effectiveness of our approaches.
Real-Time Emotion Detection for Neuro-Adaptive Systems BIBAFull-Text 109-112
  Kathrin Pollmann
Our research explores possibilities to apply neurophysiological methods for real-time emotion detection during human-technology interaction. The present paper outlines our scientific approach, research plan and methodology.
Clinical Text Analysis Using Interactive Natural Language Processing BIBAFull-Text 113-116
  Gaurav Trivedi
Natural Language Processing (NLP) systems are typically developed by informaticists skilled in machine learning techniques that are unfamiliar to end-users. Although NLP has been widely used in extracting information from clinical text, current systems generally do not provide any provisions for incorporating feedback and revising models based on input from domain experts. The goal of this research is to close this gap by building highly-usable tools suitable for the analysis of free text reports.
Perceptive Home Energy Interfaces: Navigating the Dynamic Household Structure BIBAFull-Text 117-120
  Germaine Irwin
Much discussion has taken place regarding environmental sustainability, fossil fuels and other efforts to reverse the trend of global climate change. Unfortunately, individuals often choose the path of least resistance when making home energy decisions. Thus, it is imperative to consider all of the underlying causes that influence home energy consumption in an effort to build a more perceptive interface that addresses the variety of household occupant needs.
   This research seeks to explore the dynamic nature of household occupancy, individual comfort and situational variants that impact home energy consumption in an effort to discover critical design factors for building novel interfaces for home energy systems.
On-Body Interaction for Optimized Accessibility BIBAFull-Text 121-124
  David Costa
This thesis addresses the suitability of body interaction techniques, when used by persons with different levels of visual impairment, to improve the accessibility of mobile devices. The research will focus on: understanding how on-body interaction can surpass the current accessibility levels of mobile devices, characterizing the different complexities of skin mapping for different levels of visual impairment, determining the types of input tasks for which it is best suited, and studying how it can complement other input modalities. Results will include an on-body interaction model, and several prototypes and studies characterizing body interaction.
Assisting End Users in the Design of Sonification Systems BIBAFull-Text 125-128
  KatieAnna E. Wolf
In my dissertation I plan to explore the design of digital systems and how we can support users in the design process. Specifically, I focus on the design of sonifications, which are the representation of data using sound. Creating the algorithm that maps data to sound is not an easy task as there are many things to consider: an individual's aesthetic preferences, multiple dimensions of sound, complexities of the data to be represented, and previously developed theories for how to convey information using sound. This makes it an ideal domain for end-user development and data-driven design creation.
Multimodal Interactive Machine Learning for User Understanding BIBAFull-Text 129-132
  Xuan Guo
Designing intelligent computer interfaces requires human intelligence, which can be captured through multimodal sensors during human-computer interactions. These data modalities may involve users' language, vision, and body signals, which shed light on different aspects of human cognition and behaviors. I propose to integrate multimodal data to more effectively understand users during interactions. Since users' manipulation of big data (e.g., texts, images, videos) through interfaces can be computationally intensive, an interactive machine learning framework will be constructed in an unsupervised manner.
A Revisit to The Identification of Contexts in Recommender Systems BIBAFull-Text 133-136
  Yong Zheng
In contrast to traditional recommender systems (RS), context-aware recommender systems (CARS) have emerged to adapt to users' preferences in various contextual situations. Over the years, different context-aware recommendation algorithms have been developed and have demonstrated the effectiveness of CARS. However, the field has yet to agree on a definition of context: researchers may incorporate diverse variables (e.g., user profiles or item features), which creates confusion between content-based RS and context-based RS and raises the problem of context identification in CARS. In this paper, we revisit the definition of context in recommender systems and propose a context identification framework to clarify the preliminary selection of contextual variables, which may further assist the interpretation of contextual effects in RS.
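To make the content/context distinction concrete, one common CARS strategy is contextual pre-filtering: restrict the rating data to the target contextual situation, then apply a conventional recommender. The sketch below is a hypothetical illustration of that general strategy, not the identification framework proposed in the paper; the data and the item-average recommender are placeholder assumptions.

    # Hypothetical contextual pre-filtering sketch: keep only ratings whose
    # context matches the target situation, then apply a plain item-average recommender.
    from collections import defaultdict

    ratings = [
        # (user, item, rating, context)
        ("u1", "movie_a", 5, {"companion": "family", "time": "weekend"}),
        ("u1", "movie_b", 2, {"companion": "alone", "time": "weekday"}),
        ("u2", "movie_a", 4, {"companion": "family", "time": "weekend"}),
    ]

    def prefilter(ratings, target_context):
        return [r for r in ratings if all(r[3].get(k) == v for k, v in target_context.items())]

    def item_averages(ratings):
        totals = defaultdict(list)
        for _, item, score, _ in ratings:
            totals[item].append(score)
        return {item: sum(s) / len(s) for item, s in totals.items()}

    print(item_averages(prefilter(ratings, {"companion": "family"})))  # {'movie_a': 4.5}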
AmbLEDs: Context-Aware I/O for AAL Systems BIBAFull-Text 137-140
  Marcio Cunha
Ambient Assisted Living (AAL) applications aim to allow elderly, sick, and disabled people to stay safely at home while being collaboratively assisted by their family, friends, and medical staff. In principle, AAL amalgamated with the Internet of Things (IoT) introduces a new healthcare connectivity paradigm that interconnects mobile apps and sensors, allowing constant monitoring of the patient. By hiding technology inside light fixtures, in this thesis proposal we present AmbLEDs, an ambient light sensing system, as an alternative to spreading sensors that are perceived as invasive, such as cameras, microphones, microcontrollers, tags, or wearables, in order to create a crowdware, ubiquitous, context-aware interface for recognizing, informing, and alerting about home environmental changes and human activities to support continuous proactive care.
Know your Surroundings with an Interactive Map BIBAFull-Text 141-144
  Sanorita Dey
The advancement of mobile technology has inspired research communities to achieve centimeter-level accuracy in indoor positioning systems [2]. But to get the best out of it, we need assistive navigation applications that not only help us reach the destination quickly but also make us familiar with our surroundings. To address this concern, we propose a two-stage approach that can help pedestrians navigate an indoor location and simultaneously enhance their spatial awareness. In the first stage, we will conduct a behavioral user study to identify the prominent behavioral patterns during different navigational challenges. Once the behavioral state model is prepared, we will analyze the sensor data in multiple dimensions and build a dynamic sensor state model. This model will enable us to map the behavioral state model to the sensor states and draw a direct one-to-one relation between the two.
Extended Virtual Presence of Therapists through Home Service Robots BIBFull-Text 145-148
  Hee-Tae Jung