Human-Centred Machine Learning
Workshop Summaries
Gillies, Marco / Fiebrink, Rebecca / Tanaka, Atau / Garcia, Jérémie / Bevilacqua, Frédéric / Heloir, Alexis / Nunnari, Fabrizio / Mackay, Wendy / Amershi, Saleema / Lee, Bongshin / d'Alessandro, Nicolas / Tilmanne, Joëlle / Kulesza, Todd / Caramiaux, Baptiste
Extended Abstracts of the ACM CHI'16 Conference on Human Factors in
Computing Systems
2016-05-07
v.2
p.3558-3565
© Copyright 2016 ACM
Summary: Machine learning is one of the most important and successful techniques in
contemporary computer science. It involves the statistical inference of models
(such as classifiers) from data. It is often conceived in a very impersonal
way, with algorithms working autonomously on passively collected data. However,
this viewpoint hides considerable human work of tuning the algorithms,
gathering the data, and even deciding what should be modeled in the first
place. Examining machine learning from a human-centered perspective means
explicitly recognizing this human work, reframing machine learning workflows
around situated human working practices, and exploring the co-adaptation of
humans and systems. A human-centered understanding of machine learning in human
contexts can lead not only to more usable machine learning tools, but also to
new ways of framing learning computationally. This workshop will
bring together researchers to discuss these issues and suggest future research
questions aimed at creating a human-centered approach to machine learning.
ModelTracker: Redesigning Performance Analysis Tools for Machine Learning
Understanding & Evaluating Performance
Amershi, Saleema / Chickering, Max / Drucker, Steven M. / Lee, Bongshin / Simard, Patrice / Suh, Jina
Proceedings of the ACM CHI'15 Conference on Human Factors in Computing
Systems
2015-04-18
v.1
p.337-346
© Copyright 2015 ACM
Summary: Model building in machine learning is an iterative process. The performance
analysis and debugging step typically involves a disruptive cognitive switch
from model building to error analysis, discouraging an informed approach to
model building. We present ModelTracker, an interactive visualization that
subsumes information contained in numerous traditional summary statistics and
graphs while displaying example-level performance and enabling direct error
examination and debugging. Usage analysis from machine learning practitioners
building real models with ModelTracker over six months shows ModelTracker is
used often and throughout model building. A controlled experiment focusing on
ModelTracker's debugging capabilities shows participants prefer ModelTracker
over traditional tools without a loss in model performance.
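To make the idea of example-level performance display concrete, here is a minimal, text-only sketch in that spirit (not ModelTracker itself). It assumes a scikit-learn-style binary classifier exposing predict_proba, and the toy data is invented; the point is seeing individual hits and errors along the score axis rather than only summary statistics.

```python
# Illustrative sketch only, not the authors' implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression

def example_level_view(model, X, y, n_bins=10):
    """Print one row per score bucket, marking each example as a hit ('o') or error ('X')."""
    scores = model.predict_proba(X)[:, 1]                 # P(class 1) per example
    bins = np.minimum((scores * n_bins).astype(int), n_bins - 1)
    for b in range(n_bins):
        idx = np.where(bins == b)[0]
        marks = "".join("o" if (scores[i] >= 0.5) == bool(y[i]) else "X" for i in idx)
        print(f"[{b / n_bins:.1f}, {(b + 1) / n_bins:.1f}) {marks}")

# Errors clustered near 0.5 suggest boundary confusion; errors at the
# extremes point to examples (or labels) worth inspecting directly.
X, y = np.random.rand(60, 4), np.random.randint(0, 2, 60)
example_level_view(LogisticRegression().fit(X, y), X, y)
```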
Structured labeling for facilitating concept evolution in machine learning
Decisions, recommendations, and machine learning
Kulesza, Todd / Amershi, Saleema / Caruana, Rich / Fisher, Danyel / Charles, Denis
Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems
2014-04-26
v.1
p.3075-3084
© Copyright 2014 ACM
Summary: Labeling data is a seemingly simple task required for training many machine
learning systems, but is actually fraught with problems. This paper introduces
the notion of concept evolution, the changing nature of a person's underlying
concept (the abstract notion of the target class a person is labeling for,
e.g., spam email, travel related web pages) which can result in inconsistent
labels and thus be detrimental to machine learning. We introduce two structured
labeling solutions based on a novel technique we propose for helping people
define and refine their concept in a consistent manner as they label. Through a
series of
five experiments, including a controlled lab study, we illustrate the impact
and dynamics of concept evolution in practice and show that structured labeling
helps people label more consistently in the presence of concept evolution than
traditional labeling.
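The abstract does not detail the technique's mechanics, but the underlying data structure, as we read it, might look like the hypothetical sketch below: labels organized into named sub-groups under each concept, so the labeler can refine the concept (e.g., split "spam" into sub-types) without relabeling from scratch. All names and methods here are our invention.

```python
# Hypothetical sketch of the core idea, not the paper's system.
from collections import defaultdict

class StructuredLabels:
    def __init__(self):
        # concept -> sub-group name -> list of item ids
        self.groups = defaultdict(lambda: defaultdict(list))

    def label(self, concept, group, item):
        self.groups[concept][group].append(item)

    def split_group(self, concept, old, new, items):
        """Refine the concept: move some items into a newly named sub-group."""
        self.groups[concept][old] = [i for i in self.groups[concept][old] if i not in items]
        self.groups[concept][new].extend(items)

    def flat_labels(self, concept):
        """Collapse sub-groups back into the flat labels a learner consumes."""
        return [i for grp in self.groups[concept].values() for i in grp]

labels = StructuredLabels()
labels.label("spam", "phishing", "msg-17")
labels.label("spam", "newsletters", "msg-42")
labels.split_group("spam", "newsletters", "wanted-newsletters", ["msg-42"])
```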
LiveAction: Automating Web Task Model Generation
Amershi, Saleema / Mahmud, Jalal / Nichols, Jeffrey / Lau, Tessa / Ruiz, German Attanasio
ACM Transactions on Interactive Intelligent Systems
2013-10
v.3
n.3
p.14
© Copyright 2013 ACM
Summary: Task automation systems promise to increase human productivity by assisting
us with our mundane and difficult tasks. These systems often rely on people to
(1) identify the tasks they want automated and (2) specify the procedural steps
necessary to accomplish those tasks (i.e., to create task models). However, our
interviews with users of a Web task automation system reveal that people find
it difficult to identify tasks to automate, and that most do not even believe
they perform repetitive tasks worthy of automation. Furthermore, even when
automatable tasks are identified, the well-recognized difficulties of
specifying task steps often prevent people from taking advantage of these
automation systems.
In this research, we analyze real Web usage data and find that people do in
fact repeat behaviors on the Web and that automating these behaviors,
regardless of their complexity, would reduce the overall number of actions
people need to perform when completing their tasks, potentially saving time.
Motivated by these findings, we developed LiveAction, a fully-automated
approach to generating task models from Web usage data. LiveAction models can
be used to populate the task model repositories required by many automation
systems, helping us take advantage of automation in our everyday lives.
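LiveAction's pipeline is not specified in the abstract; the following sketch only illustrates the flavor of a plausible first step, mining a clickstream for action subsequences that repeat and are thus candidates for task models. The event encoding is invented for the example.

```python
# Illustrative sketch of repeated-behavior mining, not LiveAction's code.
from collections import Counter

def repeated_ngrams(events, n=3, min_count=2):
    """Return length-n action subsequences that occur at least min_count times."""
    grams = Counter(tuple(events[i:i + n]) for i in range(len(events) - n + 1))
    return [g for g, c in grams.items() if c >= min_count]

stream = ["open:mail", "click:compose", "paste:report",
          "open:mail", "click:compose", "paste:report", "click:send"]
print(repeated_ngrams(stream))   # [('open:mail', 'click:compose', 'paste:report')]
```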
IUI workshop on interactive machine learning
Workshops
Amershi, Saleema / Cakmak, Maya / Knox, W. Bradley / Kulesza, Todd / Lau, Tessa
Proceedings of the 2013 International Conference on Intelligent User
Interfaces
2013-03-19
v.2
p.121-124
© Copyright 2013 ACM
Summary: Many applications of Machine Learning (ML) involve interactions with humans.
Humans may provide input to a learning algorithm (in the form of labels,
demonstrations, corrections, rankings or evaluations) while observing its
outputs (in the form of feedback, predictions or executions). Although humans
are an integral part of the learning process, traditional ML systems used in
these applications are agnostic to the fact that inputs/outputs are from/for
humans.
However, a growing community of researchers at the intersection of ML and
human-computer interaction is making interaction with humans a central part of
developing ML systems. These efforts include applying interaction design
principles to ML systems, using human-subject testing to evaluate ML systems
and inspire new methods, and changing the input and output channels of ML
systems to better leverage human capabilities. With this Interactive Machine
Learning (IML) workshop at IUI 2013, we aim to bring this community together to
share ideas, get up to date on recent advances, make progress towards a common
framework and terminology for the field, and discuss the open questions and
challenges of IML.
Regroup: interactive machine learning for on-demand group creation in social
networks
AI & machine-learning & translation
Amershi, Saleema / Fogarty, James / Weld, Daniel
Proceedings of ACM CHI 2012 Conference on Human Factors in Computing Systems
2012-05-05
v.1
p.21-30
© Copyright 2012 ACM
Summary: We present ReGroup, a novel end-user interactive machine learning system for
helping people create custom, on-demand groups in online social networks. As a
person adds members to a group, ReGroup iteratively learns a probabilistic
model of group membership specific to that group. ReGroup then uses its
currently learned model to suggest additional members and group characteristics
for filtering. Our evaluation shows that ReGroup is effective for helping
people create large and varied groups, whereas traditional methods (searching
by name or selecting from an alphabetical list) are better suited for small
groups whose members can be easily recalled by name. By facilitating on-demand
group creation, ReGroup can enable in-context sharing and potentially encourage
better online privacy practices. In addition, applying interactive machine
learning to social network group creation introduces several challenges for
designing effective end-user interaction with machine learning. We identify
these challenges and discuss how we address them in ReGroup.
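The abstract does not name ReGroup's probabilistic model, so the sketch below stands in with Bernoulli naive Bayes over binary profile features (school, employer, shared groups, and so on, all assumed): each time the user adds a member, the model is re-fit on the members so far and the remaining contacts are re-ranked as suggestions.

```python
# A minimal sketch under stated assumptions, not ReGroup's implementation.
import numpy as np
from sklearn.naive_bayes import BernoulliNB

def suggest_members(features, member_ids, all_ids, k=5):
    """Re-fit on members selected so far; rank non-members by P(member)."""
    y = np.array([1 if i in member_ids else 0 for i in all_ids])
    model = BernoulliNB().fit(features, y)
    probs = model.predict_proba(features)[:, 1]
    candidates = [i for i in all_ids if i not in member_ids]
    return sorted(candidates, key=lambda i: -probs[all_ids.index(i)])[:k]

# Each time the user adds a member, re-run suggest_members to refresh the list.
rng = np.random.default_rng(0)
ids = list(range(20))
feats = rng.integers(0, 2, size=(20, 6))     # binary profile features
print(suggest_members(feats, {0, 3}, ids))
```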
Designing for effective end-user interaction with machine learning
Doctoral symposium
Amershi, Saleema
Proceedings of the 2011 ACM Symposium on User Interface Software and
Technology
2011-10-16
v.2
p.47-50
© Copyright 2011 ACM
Summary: End-user interactive machine learning is a promising tool for enhancing
human capabilities with large datasets. Recent work has shown that we can create
end-user interactive machine learning systems for specific applications.
However, we still lack a generalized understanding of how to design effective
end-user interaction with interactive machine learning systems. My dissertation
work aims to advance our understanding of this question by investigating new
techniques that move beyond naïve or ad-hoc approaches and balance the
needs of both end-users and machine learning algorithms. Although these
explorations are grounded in specific applications, we endeavored to design
strategies independent of application or domain specific features. As a result,
our findings can inform future end-user interaction with machine learning
systems.
CueT: human-guided fast and accurate network alarm triage
Machine learning
Amershi, Saleema / Lee, Bongshin / Kapoor, Ashish / Mahajan, Ratul / Christian, Blaine
Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems
2011-05-07
v.1
p.157-166
© Copyright 2011 ACM
Summary: Network alarm triage refers to grouping and prioritizing a stream of
low-level device health information to help operators find and fix problems.
Today, this process tends to be largely manual because existing tools cannot
easily evolve with the network. We present CueT, a system that uses interactive
machine learning to learn from the triaging decisions of operators. It then
uses that learning in novel visualizations to help them quickly and accurately
triage alarms. Unlike prior interactive machine learning systems, CueT handles
a highly dynamic environment where the groups of interest are not known
a priori and evolve constantly. A user study with real operators and data from
a large network shows that CueT significantly improves the speed and accuracy
of alarm triage compared to the network's current practice.
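CueT's learner is not described here, so the following is only one plausible shape for triaging a stream into evolving groups: assign each alarm to the nearest group centroid when it is close enough, otherwise open a new group, and fold confirmed operator decisions back into the centroids. The threshold and features are placeholders.

```python
# Illustrative only; CueT's actual model is not given in the abstract.
import numpy as np

class StreamTriage:
    def __init__(self, threshold=0.5):
        self.centroids, self.counts = [], []
        self.threshold = threshold            # placeholder similarity cutoff

    def suggest(self, alarm):
        """Index of the closest existing group, or None if nothing is close."""
        if not self.centroids:
            return None
        dists = [np.linalg.norm(alarm - c) for c in self.centroids]
        best = int(np.argmin(dists))
        return best if dists[best] < self.threshold else None

    def confirm(self, alarm, group):
        """Operator decision: fold the alarm into its (possibly new) group."""
        if group is None:
            self.centroids.append(np.array(alarm, dtype=float))
            self.counts.append(1)
        else:
            self.counts[group] += 1
            self.centroids[group] += (alarm - self.centroids[group]) / self.counts[group]

triage = StreamTriage()
for alarm in np.random.default_rng(3).random((10, 4)):   # stand-in alarm features
    g = triage.suggest(alarm)
    triage.confirm(alarm, g)    # a real operator could override g first
```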
Examining multiple potential models in end-user interactive concept learning
Machine learning and web interactions
Amershi, Saleema / Fogarty, James / Kapoor, Ashish / Tan, Desney
Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems
2010-04-10
v.1
p.1357-1360
Keywords: end-user interactive concept learning
© Copyright 2010 ACM
Summary: End-user interactive concept learning is a technique for interacting with
large unstructured datasets, requiring insights from both human-computer
interaction and machine learning. This note re-examines an assumption implicit
in prior interactive machine learning research, that interaction should focus
on the question "what class is this object?". We broaden interaction to include
examination of multiple potential models while training a machine learning
system. We evaluate this approach and find that people naturally adopt revision
in the interactive machine learning process and that this improves the quality
of their resulting models for difficult concepts.
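One way to picture "examining multiple potential models" (our construction, not the note's system): train several candidate classifiers that are all consistent with the current labels and surface the pool examples on which they disagree, so the user compares models rather than only answering the single-object question. The regularization sweep is an assumption.

```python
# Sketch of the broadened interaction under stated assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def disagreements(X_train, y_train, X_pool, Cs=(0.01, 1.0, 100.0)):
    """Indices of pool examples on which candidate models disagree."""
    models = [LogisticRegression(C=c).fit(X_train, y_train) for c in Cs]
    preds = np.array([m.predict(X_pool) for m in models])
    return np.where((preds != preds[0]).any(axis=0))[0]

rng = np.random.default_rng(2)
X_train, y_train = rng.random((30, 4)), rng.integers(0, 2, 30)
X_pool = rng.random((100, 4))
print(disagreements(X_train, y_train, X_pool))
```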
Multiple mouse text entry for single-display groupware
Groupware technologies
Amershi, Saleema / Morris, Meredith Ringel / Moraveji, Neema / Balakrishnan, Ravin / Toyama, Kentaro
Proceedings of ACM CSCW'10 Conference on Computer-Supported Cooperative Work
2010-02-06
p.169-178
Keywords: children, education, ictd, multiple mouse, sdg, text entry
© Copyright 2010 ACM
Summary: A recent trend in interface design for classrooms in developing regions has
many students interacting on the same display using mice. Text entry has
emerged as an important problem preventing such mouse-based single-display
groupware systems from offering compelling interactive activities. We explore
the design space of mouse-based text entry and develop 13 techniques with novel
characteristics suited to the multiple-mouse scenario. We evaluated these in a
three-phase study over 14 days with 40 students in two developing-region
schools. The
results show that one technique effectively balanced all of our design
dimensions, another was most preferred by students, and both could benefit from
augmentation to support collaborative interaction. Our results also provide
insights into the factors that create an optimal text entry technique for
single-display groupware systems.
Overview based example selection in end user interactive concept learning
The tangled web we weave
Amershi, Saleema / Fogarty, James / Kapoor, Ashish / Tan, Desney
Proceedings of the 2009 ACM Symposium on User Interface Software and
Technology
2009-10-04
p.247-256
Keywords: end-user interactive concept learning
© Copyright 2009 ACM
Summary: Interaction with large unstructured datasets is difficult because existing
approaches, such as keyword search, are not always suited to describing
concepts corresponding to the distinctions people want to make within datasets.
One possible solution is to allow end users to train machine learning systems
to identify desired concepts, a strategy known as interactive concept learning.
A fundamental challenge is to design systems that preserve end users' flexibility
and control while also guiding them to provide examples that allow the machine
learning system to effectively learn the desired concept. This paper presents
our design and evaluation of four new overview-based approaches to guiding
example selection. We situate our explorations within CueFlik, a system
examining end user interactive concept learning in Web image search. Our
evaluation shows our approaches not only guide end users to select better
training examples than the best-performing previous design for this
application, but also reduce the impact of not knowing when to stop training
the system. We discuss challenges for end user interactive concept learning
systems and identify opportunities for future research on the effective design
of such systems.
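As a rough illustration of overview-based guidance (a generic stand-in, not one of the paper's four designs): cluster the unlabeled pool and surface the item nearest each cluster center, so the user's training examples cover the dataset's structure rather than one corner of it.

```python
# Generic sketch of overview-driven example selection.
import numpy as np
from sklearn.cluster import KMeans

def overview_examples(X, n_groups=8):
    """Return one representative index per cluster of the unlabeled pool."""
    km = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit(X)
    reps = []
    for c in range(n_groups):
        members = np.where(km.labels_ == c)[0]
        # the member closest to the cluster center serves as the overview item
        dists = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
        reps.append(int(members[np.argmin(dists)]))
    return reps

X = np.random.rand(200, 16)      # e.g., image feature vectors
print(overview_examples(X))      # indices of candidate training examples
```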
Amplifying community content creation with mixed-initiative information
extraction
Advanced web scenarios
Hoffmann, Raphael / Amershi, Saleema / Patel, Kayur / Wu, Fei / Fogarty, James / Weld, Daniel S.
Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems
2009-04-04
v.1
p.1849-1858
Keywords: community content creation, information extraction, mixed-initiative
interfaces
© Copyright 2009 ACM
Summary: Although existing work has explored both information extraction and
community content creation, most research has focused on them in isolation. In
contrast, we see the greatest leverage in the synergistic pairing of these
methods as two interlocking feedback cycles. This paper explores the potential
synergy promised if these cycles can be made to accelerate each other by
exploiting the same edits to advance both community content creation and
learning-based information extraction. We examine our proposed synergy in the
context of Wikipedia infoboxes and the Kylin information extraction system.
After developing and refining a set of interfaces to present the verification
of Kylin extractions as a non-primary task in the context of Wikipedia
articles, we develop an innovative use of Web search advertising services to
study people engaged in some other primary task. We demonstrate our proposed
synergy by analyzing our deployment from two complementary perspectives: (1) we
show we accelerate community content creation by using Kylin's information
extraction to significantly increase the likelihood that a person visiting a
Wikipedia article as a part of some other primary task will spontaneously
choose to help improve the article's infobox, and (2) we show we accelerate
information extraction by using contributions collected from people interacting
with our designs to significantly improve Kylin's extraction performance.
Co-located collaborative web search: understanding status quo practices
Spotlight on work in progress session 1
Amershi, Saleema / Morris, Meredith Ringel
Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems
2009-04-04
v.2
p.3637-3642
Keywords: collaborative search, search interfaces, web search
© Copyright 2009 ACM
Summary: Co-located collaborative Web search is a surprisingly common activity,
despite the fact that Web browsers and search engines are not designed to
support collaboration. We report the findings of two studies (a diary study and
an observational study) that provide insights regarding the frequency of
co-located collaborative searching, the strategies participants use, and the
pros and cons of these strategies. We then articulate design implications for
next-generation tools that could enhance the experience of co-located
collaborative search.
CoSearch: a system for co-located collaborative web search
Collaboration and Cooperation
Amershi, Saleema / Morris, Meredith Ringel
Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems
2008-04-05
v.1
p.1647-1656
© Copyright 2008 ACM
Summary: Web search is often viewed as a solitary task; however, there are many
situations in which groups of people gather around a single computer to jointly
search for information online. We present the findings of interviews with
teachers, librarians, and developing world researchers that provide details
about users' collaborative search habits in shared-computer settings, revealing
several limitations of this practice. We then introduce CoSearch, a system we
developed to improve the experience of co-located collaborative Web search by
leveraging readily available devices such as mobile phones and extra mice.
Finally, we present an evaluation comparing CoSearch to status quo
collaboration approaches, and show that CoSearch enabled distributed control
and division of labor, thus reducing the frustrations associated with
shared-computer searches, while still preserving the positive aspects of
communication and collaboration associated with joint computer use.
Pedagogy and usability in interactive algorithm visualizations: Designing
and evaluating CIspace
Amershi, Saleema / Carenini, Giuseppe / Conati, Cristina / Mackworth, Alan K. / Poole, David
Interacting with Computers
2008
v.20
n.1
p.64-96
Keywords: Interactive algorithm visualization; Pedagogy; Design; Evaluation; Human
factors; Artificial intelligence
© Copyright 2008 Elsevier B.V.
1. Introduction
2. Background
3. CIspace goals
3.1. Pedagogical goals
3.2. Usability goals
4. CIspace design for pedagogical and usability goals
4.1. Introduction to CSPs and AC-3
4.2. Design features
4.2.1. Accessibility
4.2.2. Coverage and modularity
4.2.3. Consistency
4.2.4. Graph-based visual representations
4.2.5. Sample problems
4.2.6. Create new problems
4.2.7. Interaction
4.2.8. System help
5. Evaluation
5.1. Evaluation 1: Semi-formal usability testing
5.2. Evaluation 2: Controlled experiment measuring knowledge acquisition
5.2.1. Materials
5.2.2. Procedure
5.2.3. Discussion of results
5.3. Evaluation 3: Usability survey in advanced AI course
5.4. Evaluation 4: Controlled experiment measuring preference
5.4.1. Materials
5.4.2. Procedure
5.4.3. Discussion of results
5.5. Evaluation 5: Usability survey in introductory AI course
6. Future work
7. Conclusions
Acknowledgements
Appendix A. Written sample constraint satisfaction problems
Appendix B. Tests
B.1. Pre-test
B.2. Post-test
Appendix C. Questionnaires for pedagogical experiment 1
C.1. Non-applet group questionnaire
C.2. Applet group questionnaire
Appendix D. Questionnaires for pedagogical experiment 2
D.1. Questionnaire 1
D.2. Questionnaire 2
References
Summary: Interactive algorithm visualizations (AVs) are powerful tools for teaching
and learning concepts that are difficult to describe with static media alone.
However, while countless AVs exist, they have not been widely adopted by the
academic community, owing to usability problems and the mixed results on
pedagogical effectiveness reported in the AV and education literature. This
paper presents our experiences designing and evaluating CIspace, a set of
interactive AVs for demonstrating fundamental Artificial Intelligence
algorithms. In particular, we first review related work on AVs and theories of
learning. Then, from this literature, we extract and compile a taxonomy of
goals for designing interactive AVs that address key pedagogical and usability
limitations of existing AVs. We advocate that differentiating between goals and
design features that implement these goals will help designers of AVs make more
informed choices, especially considering the abundance of often conflicting and
inconsistent design recommendations in the AV literature. We also describe and
present the results of a range of evaluations that we have conducted on CIspace
that include semi-formal usability studies, usability surveys from actual
students using CIspace as a course resource, and formal user studies designed
to assess the pedagogical effectiveness of CIspace in terms of both knowledge
gain and user preference. Our main results show that (i) studying with our
interactive AVs is at least as effective at increasing student knowledge as
studying with carefully designed paper-based materials; (ii) students like
using our interactive AVs more than studying with the paper-based materials;
(iii) students use both our interactive AVs and paper-based materials in
practice although they are divided when forced to choose between them; (iv)
students find our interactive AVs generally easy to use and useful. From these
results, we conclude that while interactive AVs may not be universally
preferred by students, it is beneficial to offer a variety of learning media to
students to accommodate individual learning preferences. We hope that our
experiences will be informative for other developers of interactive AVs, and
encourage educators to exploit these potentially powerful resources in
classrooms and other learning environments.
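Since the article's running example is the AC-3 arc-consistency algorithm (Section 4.1), a compact reference version may help readers unfamiliar with it. This is standard textbook AC-3, not CIspace's applet code; domains map each variable to a set of values, and constraints are binary predicates keyed by ordered variable pair.

```python
# Standard AC-3 for binary CSPs.
from collections import deque

def ac3(domains, constraints):
    """Prune domains to arc consistency; return False on a domain wipe-out."""
    queue = deque(constraints.keys())
    while queue:
        x, y = queue.popleft()
        if revise(domains, constraints, x, y):
            if not domains[x]:
                return False                    # x has no values left
            # re-check every arc pointing at x
            queue.extend((z, x) for (z, w) in constraints if w == x and z != y)
    return True

def revise(domains, constraints, x, y):
    """Remove values of x that have no supporting value in y."""
    removed = False
    for vx in set(domains[x]):
        if not any(constraints[(x, y)](vx, vy) for vy in domains[y]):
            domains[x].discard(vx)
            removed = True
    return removed

# Toy CSP x < y: AC-3 prunes x=3 and y=1.
doms = {"x": {1, 2, 3}, "y": {1, 2, 3}}
cons = {("x", "y"): lambda a, b: a < b, ("y", "x"): lambda b, a: b > a}
ac3(doms, cons)
print(doms)   # {'x': {1, 2}, 'y': {2, 3}}
```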
Unsupervised and supervised machine learning in user modeling for
intelligent learning environments
User modeling
Amershi, Saleema / Conati, Cristina
Proceedings of the 2007 International Conference on Intelligent User
Interfaces
2007-01-28
p.72-81
© Copyright 2007 ACM
Summary: In this research, we outline a user modeling framework that uses both
unsupervised and supervised machine learning in order to reduce development
costs of building user models, and facilitate transferability. We apply the
framework to model student learning during interaction with the Adaptive Coach
for Exploration (ACE) learning environment (using both interface and
eye-tracking data). In addition to demonstrating the framework's effectiveness,
we compare our results with previous research that applied the framework to a
different learning environment and data type. Our results also confirm previous
research on the value of using eye-tracking data to assess student learning.
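The two-phase framework can be sketched concretely, with the details as our assumptions: cluster interaction logs without labels to discover learner groups, then train a supervised classifier on those cluster assignments so new students can be assessed online. The features and model choices below are illustrative stand-ins.

```python
# Rough sketch of the cluster-then-classify framework the abstract describes.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
logs = rng.random((100, 5))   # per-student features, e.g. action latencies,
                              # exploration counts, gaze dwell times (assumed)

# Phase 1 (unsupervised): discover behavior groups, e.g. effective vs.
# ineffective exploration; an expert would inspect and name the clusters.
clusters = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(logs)

# Phase 2 (supervised): learn a classifier that reproduces the grouping,
# usable for online assessment of new students in the same environment.
clf = DecisionTreeClassifier(max_depth=3).fit(logs, clusters)
print(clf.predict(rng.random((1, 5))))
```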