Human-Centred Machine Learning
Workshop Summaries
Gillies, Marco / Fiebrink, Rebecca / Tanaka, Atau / Garcia, Jérémie /
Bevilacqua, Frédéric / Heloir, Alexis / Nunnari, Fabrizio / Mackay, Wendy /
Amershi, Saleema / Lee, Bongshin / d'Alessandro, Nicolas / Tilmanne, Joëlle /
Kulesza, Todd / Caramiaux, Baptiste
Extended Abstracts of the ACM CHI'16 Conference on Human Factors in
Computing Systems
2016-05-07
v.2
p.3558-3565
© Copyright 2016 ACM
Summary: Machine learning is one of the most important and successful techniques in
contemporary computer science. It involves the statistical inference of models
(such as classifiers) from data. It is often conceived in a very impersonal
way, with algorithms working autonomously on passively collected data. However,
this viewpoint hides considerable human work of tuning the algorithms,
gathering the data, and even deciding what should be modeled in the first
place. Examining machine learning from a human-centred perspective includes
explicitly recognising this human work, as well as reframing machine learning
workflows based on situated human working practices, and exploring the
co-adaptation of humans and systems. A human-centred understanding of machine
learning in human context can lead not only to more usable machine learning
tools, but also to new ways of framing learning computationally. This workshop
will bring together researchers to discuss these issues and suggest future
research questions aimed at creating a human-centred approach to machine
learning.
Adaptive hand-tracked system for 3D authoring
Interaction techniques: dimensions > 2
Heloir, Alexis / Nunnari, Fabrizio / Kolski, Christophe
Proceedings of the 2014 Conference of the Association Francophone
d'Interaction Homme-Machine
2014-10-28
p.101-104
© Copyright 2014 ACM
Summary: We present the interaction design and the component architecture of an
adaptive authoring system based on a consumer-range 3D input device. We claim
that this system can help both novice and experienced users perform authoring
tasks in a 3D authoring environment. The system uses a keyboardless,
self-adaptive interaction controller built upon a rule-based system that learns
and infers the user's behavior and condition on the fly from her actions,
rearranging rules when necessary and suggesting breaks to avoid performance
drops caused by fatigue or the so-called gorilla-arm effect.
Assessing the deaf user perspective on sign language avatars
Sign language comprehension
Kipp, Michael / Nguyen, Quan / Heloir, Alexis / Matthes, Silke
Thirteenth Annual ACM SIGACCESS Conference on Assistive Technologies
2011-10-24
p.107-114
© Copyright 2011 ACM
Summary: Signing avatars have the potential to become a useful and even
cost-effective method to make written content more accessible for Deaf people.
However, avatar research is characterized by the fact that most researchers are
not members of the Deaf community, and that Deaf people as potential users have
little or no knowledge about avatars. Therefore, we suggest two well-known
methods, focus groups and online studies, as a two-way information exchange
between researchers and the Deaf community. Our aim was to assess the
acceptability of signing avatars, the shortcomings of current avatars, and
potential use cases. We
conducted two focus group interviews (N=8) and, to quantify important issues,
created an accessible online user study (N=317). This paper deals with both the
methodology used and the elicited opinions and criticism. While we found a
positive baseline response to the idea of signing avatars, we also show that
there is a statistically significant increase in positive opinion caused by
participating in the studies. We argue that inclusion of Deaf people on many
levels will foster acceptance as well as provide important feedback regarding
key aspects of avatar technology that need to be improved.
Requirements for a Gesture Specification Language -- A Comparison of Two
Representation Formalisms
Gesture Simulation
Heloir, Alexis / Kipp, Michael
GW 2009: Gesture Workshop
2009-02-25
p.207-218
Keywords: embodied conversational agents; gesture description language; comparative
study
© Copyright 2009 Springer-Verlag
Summary: We present a comparative study of two gesture specification languages. Our
aim is to derive requirements for a new, optimal specification language that
can be used to extend the emerging BML standard. We compare MURML, which has
been designed to specify coverbal gestures, and a language we call LV,
originally designed to describe French Sign Language utterances. As a first
step toward a new gesture specification language we created EMBRScript, a
low-level animation language capable of describing multi-channel animations,
which can be used as a foundation for future BML extensions.
A Qualitative and Quantitative Characterisation of Style in Sign Language
Gestures
Sign Language Processing
Heloir, Alexis / Gibet, Sylvie
GW 2007: Gesture Workshop
2007-05-23
p.122-133
© Copyright 2007 Springer-Verlag
Summary: This paper addresses the identification and representation of the variations
induced by style for the synthesis of realistic and convincing expressive sign
language gesture sequences. A qualitative and quantitative comparison of styled
gesture sequences is made. This comparison leads to the identification of
temporal, spatial, and structural processes that are described in a theoretical
model of sign language phonology. Insights gained from this study are then
considered within the more general framework of gesture synthesis in order to
enhance existing gesture specification systems.
Captured Motion Data Processing for Real Time Synthesis of Sign Language
Gesture Analysis
Heloir, Alexis / Gibet, Sylvie / Multon, Franck / Courty, Nicolas
GW 2005: Gesture Workshop
2005-05-18
p.168-171
© Copyright 2005 Springer-Verlag
Summary: This study proposes a roadmap for the creation and specification of a
virtual humanoid capable of performing expressive gestures in real time. We
present a gesture motion data acquisition protocol capable of handling the main
articulators involved in human expressive gesture (whole body, fingers and
face). The focus then shifts to the post-processing of the captured data,
leading to a motion database that complies with our motion specification
language and can feed data-driven animation techniques.