Human-Centred Machine Learning
Workshop Summaries
Gillies, Marco / Fiebrink, Rebecca / Tanaka, Atau / Garcia, Jérémie /
Bevilacqua, Frédéric / Heloir, Alexis / Nunnari, Fabrizio / Mackay, Wendy /
Amershi, Saleema / Lee, Bongshin / d'Alessandro, Nicolas / Tilmanne, Joëlle /
Kulesza, Todd / Caramiaux, Baptiste
Extended Abstracts of the ACM CHI'16 Conference on Human Factors in
Computing Systems
2016-05-07
v.2
p.3558-3565
© Copyright 2016 ACM
Summary: Machine learning is one of the most important and successful techniques in
contemporary computer science. It involves the statistical inference of models
(such as classifiers) from data. It is often conceived in a very impersonal
way, with algorithms working autonomously on passively collected data. However,
this viewpoint hides considerable human work of tuning the algorithms,
gathering the data, and even deciding what should be modeled in the first
place. Examining machine learning from a human-centered perspective includes
explicitly recognising this human work, as well as reframing machine learning
workflows based on situated human working practices, and exploring the
co-adaptation of humans and systems. A human-centered understanding of machine
learning in human context can lead not only to more usable machine learning
tools, but to new ways of framing learning computationally. This workshop will
bring together researchers to discuss these issues and suggest future research
questions aimed at creating a human-centered approach to machine learning.
Principles of Explanatory Debugging to Personalize Interactive Machine
Learning
Interactive Machine Learning / Decision Making / Topic Modeling / Robotics
Kulesza, Todd / Burnett, Margaret / Wong, Weng-Keen / Stumpf, Simone
Proceedings of the 2015 International Conference on Intelligent User
Interfaces
2015-03-29
v.1
p.126-137
© Copyright 2015 ACM
Summary: How can end users efficiently influence the predictions that machine
learning systems make on their behalf? This paper presents Explanatory
Debugging, an approach in which the system explains to users how it made each
of its predictions, and the user then explains any necessary corrections back
to the learning system. We present the principles underlying this approach and
a prototype instantiating it. An empirical evaluation shows that Explanatory
Debugging increased participants' understanding of the learning system by 52%
and allowed participants to correct its mistakes up to twice as efficiently as
participants using a traditional learning system.
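The two-way loop this abstract describes (the system explains each prediction to the user; the user explains corrections back) can be sketched with a toy linear text filter whose per-word weights serve as both the explanation and the correction handle. This is a minimal sketch of the idea, not the paper's prototype; all words, weights, and function names below are illustrative assumptions:

```python
# Feature weights the system has learned: positive pushes toward "junk",
# negative toward "important". Exposed so the user can inspect and edit them.
weights = {"sale": 2.0, "meeting": -1.5, "free": 1.8, "agenda": -2.0}

def predict(words):
    """Score a message; a positive total files it as 'junk'."""
    score = sum(weights.get(w, 0.0) for w in words)
    return ("junk" if score > 0 else "important"), score

def explain(words):
    """System -> user: the words that pushed this prediction, strongest first."""
    return sorted(((w, weights.get(w, 0.0)) for w in set(words)),
                  key=lambda pair: abs(pair[1]), reverse=True)

def correct(word, delta):
    """User -> system: adjust how strongly a word counts."""
    weights[word] = weights.get(word, 0.0) + delta
```

A session might run: `predict(["free", "agenda"])` comes out "important"; `explain` shows "agenda" (-2.0) outweighing "free" (1.8); the user runs `correct("free", 1.0)`, and the same message now scores 0.8 and files as "junk".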
Structured labeling for facilitating concept evolution in machine learning
Decisions, recommendations, and machine learning
Kulesza, Todd / Amershi, Saleema / Caruana, Rich / Fisher, Danyel /
Charles, Denis
Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems
2014-04-26
v.1
p.3075-3084
© Copyright 2014 ACM
Summary: Labeling data is a seemingly simple task required for training many machine
learning systems, but is actually fraught with problems. This paper introduces
the notion of concept evolution, the changing nature of a person's underlying
concept (the abstract notion of the target class a person is labeling for,
e.g., spam email, travel related web pages) which can result in inconsistent
labels and thus be detrimental to machine learning. We introduce structured
labeling, a novel technique we propose for helping people define and
refine their concept in a consistent manner as they label. Through a series of
five experiments, including a controlled lab study, we illustrate the impact
and dynamics of concept evolution in practice and show that structured labeling
helps people label more consistently in the presence of concept evolution than
traditional labeling.
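The grouping idea behind structured labeling can be sketched as a small data structure: instead of forcing a flat decision, each item lands in a user-named group under a decision, and a whole group can be re-filed when the labeler's concept shifts. This is a hypothetical sketch of the idea, not the paper's tool; the decision names and example items are assumptions:

```python
from collections import defaultdict

class StructuredLabels:
    """Labels organized into user-named groups under each top-level decision
    ("yes" / "no" / "could-be"), so borderline items land in a named group
    instead of an arbitrary flat label."""

    DECISIONS = ("yes", "no", "could-be")

    def __init__(self):
        # groups[decision][group_name] -> items filed there
        self.groups = {d: defaultdict(list) for d in self.DECISIONS}

    def label(self, item, decision, group):
        if decision not in self.DECISIONS:
            raise ValueError(f"unknown decision: {decision!r}")
        self.groups[decision][group].append(item)

    def move_group(self, group, src, dst):
        """Concept evolution: when the labeler's concept shifts, re-file a
        whole group at once instead of revisiting items one by one."""
        self.groups[dst][group].extend(self.groups[src].pop(group, []))

    def flat_labels(self):
        """Collapse the structure to (item, decision) pairs for a learner."""
        return [(item, d) for d in self.DECISIONS
                for items in self.groups[d].values() for item in items]
```

For a travel-pages concept, a labeler might file "hotel review blog" under a "reviews" group in "could-be", then later decide reviews do count as travel-related and move the whole group to "yes" in one step, keeping the exported labels consistent.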
IUI workshop on interactive machine learning
Workshops
Amershi, Saleema / Cakmak, Maya / Knox, W. Bradley / Kulesza, Todd /
Lau, Tessa
Proceedings of the 2013 International Conference on Intelligent User
Interfaces
2013-03-19
v.2
p.121-124
© Copyright 2013 ACM
Summary: Many applications of Machine Learning (ML) involve interactions with humans.
Humans may provide input to a learning algorithm (in the form of labels,
demonstrations, corrections, rankings or evaluations) while observing its
outputs (in the form of feedback, predictions or executions). Although humans
are an integral part of the learning process, traditional ML systems used in
these applications are agnostic to the fact that inputs/outputs are from/for
humans.
However, a growing community of researchers at the intersection of ML and
human-computer interaction are making interaction with humans a central part of
developing ML systems. These efforts include applying interaction design
principles to ML systems, using human-subject testing to evaluate ML systems
and inspire new methods, and changing the input and output channels of ML
systems to better leverage human capabilities. With this Interactive Machine
Learning (IML) workshop at IUI 2013 we aim to bring this community together to
share ideas, get up-to-date on recent advances, progress towards a common
framework and terminology for the field, and discuss the open questions and
challenges of IML.
Tell me more?: the effects of mental model soundness on personalizing an
intelligent agent
AI & machine-learning & translation
Kulesza, Todd / Stumpf, Simone / Burnett, Margaret / Kwan, Irwin
Proceedings of ACM CHI 2012 Conference on Human Factors in Computing Systems
2012-05-05
v.1
p.1-10
© Copyright 2012 ACM
Summary: What does a user need to know to productively work with an intelligent
agent? Intelligent agents and recommender systems are gaining widespread use,
potentially creating a need for end users to understand how these systems
operate in order to fix their agent's personalized behavior. This paper
explores the effects of mental model soundness on such personalization by
providing structural knowledge of a music recommender system in an empirical
study. Our findings show that participants were able to quickly build sound
mental models of the recommender system's reasoning, and that participants who
most improved their mental models during the study were significantly more
likely to make the recommender operate to their satisfaction. These results
suggest that by helping end users understand a system's reasoning, intelligent
agents may elicit more and better feedback, thus more closely aligning their
output with each user's intentions.
Towards recognizing "cool": can end users help computer vision recognize
subjective attributes of objects in images?
Poster presentation
Curran, William / Moore, Travis / Kulesza, Todd / Wong, Weng-Keen /
Todorovic, Sinisa / Stumpf, Simone / White, Rachel / Burnett, Margaret
Proceedings of the 2012 International Conference on Intelligent User
Interfaces
2012-02-14
p.285-288
© Copyright 2012 ACM
Summary: Recent computer vision approaches are aimed at richer image interpretations
that extend the standard recognition of objects in images (e.g., cars) to also
recognize object attributes (e.g., cylindrical, has-stripes, wet). However, the
more idiosyncratic and abstract the notion of an object attribute (e.g., cool
car), the more challenging the task of attribute recognition. This paper
considers whether end users can help vision algorithms recognize highly
idiosyncratic attributes, referred to here as subjective attributes. We
empirically investigated how end users recognized three subjective attributes
of cars: cool, cute, and classic. Our results suggest the feasibility of vision
algorithms recognizing subjective attributes of objects, but an interactive
approach beyond standard supervised learning from labeled training examples is
needed.
An explanation-centric approach for personalizing intelligent agents
Doctoral consortium
Kulesza, Todd
Proceedings of the 2012 International Conference on Intelligent User
Interfaces
2012-02-14
p.375-378
© Copyright 2012 ACM
Summary: Intelligent agents are becoming ubiquitous in the lives of users, but the
research community has only recently begun to study how people establish trust
in and communicate with such agents. I plan to design an explanation-centric
approach to support end users in personalizing their intelligent agents and in
assessing their strengths and weaknesses. My goal is to define an approach that
helps people understand when they can rely on their intelligent agents'
decisions, and allows them to directly debug their agents' reasoning when it
does not align with their own.
Why-oriented end-user debugging of naive Bayes text classification
Kulesza, Todd / Stumpf, Simone / Wong, Weng-Keen / Burnett, Margaret M. /
Perona, Stephen / Ko, Andrew / Oberst, Ian
ACM Transactions on Interactive Intelligent Systems
2011-10
v.1
n.1
p.2
© Copyright 2011 ACM
Summary: Machine learning techniques are increasingly used in intelligent assistants,
that is, software targeted at and continuously adapting to assist end users
with email, shopping, and other tasks. Examples include desktop SPAM filters,
recommender systems, and handwriting recognition. Fixing such intelligent
assistants when they learn incorrect behavior, however, has received only
limited attention. To directly support end-user "debugging" of assistant
behaviors learned via statistical machine learning, we present a Why-oriented
approach which allows users to ask questions about how the assistant made its
predictions, provides answers to these "why" questions, and allows users to
interactively change these answers to debug the assistant's current and future
predictions. To understand the strengths and weaknesses of this approach, we
then conducted an exploratory study to investigate barriers that participants
could encounter when debugging an intelligent assistant using our approach, and
the information those participants requested to overcome these barriers. To
help ensure the inclusiveness of our approach, we also explored how gender
differences played a role in understanding barriers and information needs. We
then used these results to consider opportunities for Why-oriented approaches
to address user barriers and information needs.
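A minimal version of the Why-oriented idea for a naive Bayes text filter can be sketched as follows: the answer to "why was this predicted?" is the per-word contribution to the class score, and the user's edit to that answer becomes a change to the underlying word counts. This is an illustrative sketch under assumptions of my own (class names, the `why`/`correct` interface, and the correction weight are not from the paper):

```python
import math
from collections import defaultdict

class WhyOrientedNB:
    """Toy multinomial naive Bayes filter whose predictions can be
    queried ('why?') and patched word-by-word."""

    def __init__(self, labels=("important", "junk")):
        # counts[label][word]: word pseudo-counts, defaulting to 1
        # (Laplace smoothing, so unseen words are never impossible)
        self.counts = {lab: defaultdict(lambda: 1) for lab in labels}

    def train(self, words, label):
        for w in words:
            self.counts[label][w] += 1

    def _score(self, words, label):
        total = sum(self.counts[label].values())
        return sum(math.log(self.counts[label][w] / total) for w in words)

    def predict(self, words):
        return max(self.counts, key=lambda lab: self._score(words, lab))

    def why(self, words, label):
        """Answer 'why this label?': words ordered by how much more likely
        they are under `label` than under the other class."""
        other = next(lab for lab in self.counts if lab != label)
        t_lab = sum(self.counts[label].values())
        t_oth = sum(self.counts[other].values())
        contrib = {w: math.log(self.counts[label][w] / t_lab)
                      - math.log(self.counts[other][w] / t_oth)
                   for w in set(words)}
        return sorted(contrib, key=contrib.get, reverse=True)

    def correct(self, word, label, weight=5):
        """User edits the answer: make `word` count more toward `label`."""
        self.counts[label][word] += weight
```

The design point matches the abstract: `why` turns the model's internals into an inspectable answer, and `correct` lets the user change that answer directly, debugging current and future predictions rather than relabeling examples and hoping.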
Fixing the program my computer learned: barriers for end users, challenges
for the machine
Demonstration based interfaces
Kulesza, Todd / Wong, Weng-Keen / Stumpf, Simone / Perona, Stephen /
White, Rachel / Burnett, Margaret M. / Oberst, Ian / Ko, Andrew J.
Proceedings of the 2009 International Conference on Intelligent User
Interfaces
2009-02-08
p.187-196
Keywords: debugging, end-user programming, machine learning
© Copyright 2009 ACM
Summary: The results of machine learning from user behavior can be thought of as a
program, and like all programs, it may need to be debugged. Providing ways for
the user to debug it matters, because without the ability to fix errors users
may find that the learned program's errors are too damaging for them to be able
to trust such programs. We present a new approach to enable end users to debug
a learned program. We then use an early prototype of our new approach to
conduct a formative study to determine where and when debugging issues arise,
both in general and also separately for males and females. The results suggest
opportunities to make machine-learned programs more effective tools.