Future of Human-Building Interaction
Workshop Summaries
/
Alavi, Hamed S.
/
Lalanne, Denis
/
Nembrini, Julien
/
Churchill, Elizabeth
/
Kirk, David
/
Moncur, Wendy
Extended Abstracts of the ACM CHI'16 Conference on Human Factors in
Computing Systems
2016-05-07
v.2
p.3408-3414
© Copyright 2016 ACM
Summary: By 2030, we will have a different interactive experience with our built
environments, at home, at work, and even in public urban spaces. This will be
driven by advances in sensing and actuation systems that can be integrated into
building infrastructure, together with new environmental concerns that call for
new ways of living, working, and moving. This change, whether gradual or sudden,
evident or seamless, can have a remarkable impact on our everyday experiences,
and thus calls for efforts to envision possible scenarios and plan for them. We
believe that buildings, as they come to embody our daily digital and physical
interactive experiences, should be designed and nurtured in dialogue with their
users at both the individual and social levels. This implies a responsibility
for the HCI community to intervene and involve users in Human-Building
Interaction (HBI) design practice. We propose bringing together expertise from
human-computer interaction, building and urban architecture, and the social
sciences, and providing these communities with an occasion to collaboratively
create and share 'images' of HBI in 2030. The goal is to uncover the research
opportunities and challenges that will emerge through discussion and
multi-faceted debate of the proposed topics.
Tangible Meets Gestural: Comparing and Blending Post-WIMP Interaction
Paradigms
Student Design Challenge
/
Angelini, Leonardo
/
Lalanne, Denis
/
van den Hoven, Elise
/
Mazalek, Ali
/
Khaled, Omar Abou
/
Mugellini, Elena
Proceedings of the 2015 International Conference on Tangible and Embedded
Interaction
2015-01-15
p.473-476
© Copyright 2015 ACM
Summary: More and more objects in our everyday environment are becoming smart and
connected, offering us new interaction possibilities. Tangible interaction and
gestural interaction are promising means of communicating with these objects in
this post-WIMP interaction era. Although based on different principles, both
exploit our body awareness and skills to provide richer and more intuitive
interaction. Occasionally, when user gestures involve physical artifacts,
tangible interaction and gestural interaction can blend into a new paradigm,
i.e., tangible gesture interaction [5]. This workshop fosters comparison between
these interaction paradigms and offers a unique opportunity to discuss their
analogies and differences, as well as the definitions, boundaries, strengths,
application domains, and perspectives of tangible gesture interaction.
Participants from different backgrounds are invited.
Towards an Anthropomorphic Lamp for Affective Interaction
Work-in-Progress: Poster/Demo Presentations
/
Angelini, Leonardo
/
Caon, Maurizio
/
Lalanne, Denis
/
Khaled, Omar Abou
/
Mugellini, Elena
Proceedings of the 2015 International Conference on Tangible and Embedded
Interaction
2015-01-15
p.661-666
© Copyright 2015 ACM
Summary: This paper presents the concept of a lamp that displays and collects
users' emotional states. It displays the emotional information by changing
colors and facial expressions; the lamp has an anthropomorphic form and behavior
in order to make the interaction more natural and spontaneous. The user can
interact with the lamp through tangible gestures typically used in human social
interactions. Two scenarios involving the use of the lamp, as a companion and
for computer-mediated communication, are presented.
An anthropomorphic lamp for the communication of emotions
Work in Progress (TeC)
/
Angelini, Leonardo
/
Caon, Maurizio
/
Lalanne, Denis
/
Khaled, Omar Abou
/
Mugellini, Elena
Proceedings of the 2014 Conference of the Association Francophone
d'Interaction Homme-Machine
2014-10-28
p.207-212
© Copyright 2014 ACM
Summary: This article presents the design of a lamp that can represent and
collect users' emotional states through multimodal interaction based on tangible
gestures on the users' side and colors and facial expressions on the lamp's
side. In particular, the lamp benefits from an anthropomorphic form and behavior
that make the interaction more natural. Two application scenarios are presented,
along with the implementation details of one of them.
Gesturing on the Steering Wheel: a User-elicited taxonomy
Poster Presentations
/
Angelini, Leonardo
/
Carrino, Francesco
/
Carrino, Stefano
/
Caon, Maurizio
/
Khaled, Omar Abou
/
Baumgartner, Jürgen
/
Sonderegger, Andreas
/
Lalanne, Denis
/
Mugellini, Elena
AutomotiveUI 2014: International Conference on Automotive User Interfaces
and Interactive Vehicular Applications
2014-09-17
v.1
n.8 pages
p.31
© Copyright 2014 ACM
Summary: "Eyes on the road, hands on the wheel" is a crucial principle to be taken
into account designing interactions for current in-vehicle interfaces. Gesture
interaction is a promising modality that can be implemented following this
principle in order to reduce driver distraction and increase safety. We present
the results of a user elicitation for gestures performed on the surface of the
steering wheel. We asked to 40 participants to elicit 6 gestures, for a total
of 240 gestures. Based on the results of this experience, we derived a taxonomy
of gestures performed on the steering wheel. The analysis of the results offers
useful suggestions for the design of in-vehicle gestural interfaces based on
this approach.
Hugginess: encouraging interpersonal touch through smart clothes
Workshop on Atelier of Smart Garments and Accessories (ASGA)
/
Angelini, Leonardo
/
Khaled, Omar Abou
/
Caon, Maurizio
/
Mugellini, Elena
/
Lalanne, Denis
Adjunct Proceedings of the 2014 International Symposium on Wearable
Computers
2014-09-13
v.2
p.155-162
© Copyright 2014 ACM
Summary: Physical contact plays an important role in human well-being. In this
paper, we present Hugginess, a concept for an interactive system that encourages
people to hug by augmenting this gesture with a digital information exchange. As
a proof of concept, we developed two t-shirts that reciprocally send information
to the hugged person through conductive fabric.
A Survey of Datasets for Human Gesture Recognition
Gesture, Gaze and Activity Recognition
/
Ruffieux, Simon
/
Lalanne, Denis
/
Mugellini, Elena
/
Khaled, Omar Abou
HCI International 2014: 16th International Conference on HCI, Part II:
Advanced Interaction Modalities and Techniques
2014-06-22
v.2
p.337-348
Keywords: human-computer interaction; gesture recognition; datasets; survey
© Copyright 2014 Springer International Publishing
Summary: This paper presents a survey of datasets created for the field of
gesture recognition. The main characteristics of the datasets are presented in
two tables to give researchers clear and rapid access to the information. The
paper also provides a comprehensive description of the datasets and discusses
their general strengths and limitations. Guidelines for the creation and
selection of datasets for gesture recognition are proposed. This survey should
be a key access point for researchers looking to create or use datasets in the
field of human gesture recognition.
Improving In-game Gesture Learning with Visual Feedback
Interacting with Games
/
Schwaller, Matthias
/
Kühni, Jan
/
Angelini, Leonardo
/
Lalanne, Denis
HCI International 2014: 16th International Conference on HCI, Part III:
Applications and Services
2014-06-22
v.3
p.643-653
Keywords: Gestural interfaces; User evaluation; In-game Feedback; Accelerometer
© Copyright 2014 Springer International Publishing
Summary: This paper presents research on gesture recognition and feedback aimed
at reducing the learning time of new gestures and increasing user performance in
a game application. A Wiimote-controlled space shooter game, GeStar Wars, has
been developed. The player controls a spaceship through the controller's
buttons, while forearm gestures can be used to perform special actions. Gesture
strokes are mapped onto a 3x3 grid and are differentiated according to the path
of the covered grid cells. In-game visual feedback shows the user the current
gesture path and, once the gesture is performed, which cells were covered. The
novelty of this research lies in coupling the gesture recognition methodology
with feedback that helps the user learn and correct the gestures. The
evaluation, conducted with 12 users, showed that users performed significantly
better when feedback was provided.
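The summary above describes strokes quantized into a 3x3 grid and distinguished
by the sequence of covered cells. The following is a minimal illustrative sketch
of that general idea, not the paper's implementation; the coordinate bounds,
cell numbering, and function name are assumptions.

```python
# Illustrative sketch (not the paper's code): quantizing a 2-D gesture stroke
# into the ordered sequence of 3x3 grid cells it covers.

def stroke_to_grid_path(points, width, height, rows=3, cols=3):
    """Map a list of (x, y) samples to the ordered list of covered cells."""
    path = []
    for x, y in points:
        col = min(int(x / width * cols), cols - 1)
        row = min(int(y / height * rows), rows - 1)
        cell = row * cols + col            # cells numbered 0..8, row-major
        if not path or path[-1] != cell:   # record only cell transitions
            path.append(cell)
    return path

# Example: a roughly diagonal stroke across a 300x300 sensing area
print(stroke_to_grid_path([(10, 10), (150, 140), (290, 290)], 300, 300))
# -> [0, 4, 8]
```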
ChAirGest: a challenge for multimodal mid-air gesture recognition for close
HCI
ChaLearn challenge and workshop on multi-modal gesture recognition
/
Ruffieux, Simon
/
Lalanne, Denis
/
Mugellini, Elena
Proceedings of the 2013 International Conference on Multimodal Interaction
2013-12-09
p.483-488
© Copyright 2013 ACM
Summary: In this paper, we present a research-oriented open challenge focusing
on multimodal gesture spotting and recognition from continuous sequences in the
context of close human-computer interaction. We outline the added value of the
proposed challenge by presenting the most recent and popular challenges and
corpora available in the field. We then present the procedures for data
collection and corpus creation, and the tools that have been developed for
participants. Finally, we introduce a novel single performance metric developed
to quantitatively evaluate the spotting and recognition task with multiple
sensors.
Opportunistic synergy: a classifier fusion engine for micro-gesture
recognition
Interaction techniques 1 -- gesturing
/
Angelini, Leonardo
/
Carrino, Francesco
/
Carrino, Stefano
/
Caon, Maurizio
/
Lalanne, Denis
/
Khaled, Omar Abou
/
Mugellini, Elena
AutomotiveUI 2013: International Conference on Automotive User Interfaces
and Interactive Vehicular Applications
2013-10-28
p.30-37
© Copyright 2013 ACM
Summary: In this paper, we present a novel opportunistic paradigm for in-vehicle
gesture recognition. This paradigm allows two or more subsystems to be used in a
synergistic manner: they can work in parallel, but the absence of any of them
does not compromise the functioning of the whole system. In order to segment and
recognize micro-gestures performed by the user on the steering wheel, we combine
a wearable approach based on electromyography of the user's forearm muscles with
an environmental approach based on pressure sensors integrated directly into the
steering wheel. We present and analyze several fusion methods and gesture
segmentation strategies. A prototype has been developed and evaluated with data
from nine subjects. The results show that the proposed opportunistic system
performs as well as or better than each stand-alone subsystem while increasing
the interaction possibilities.
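To illustrate the opportunistic idea described above, here is a minimal sketch
of one possible late-fusion strategy: per-class scores from the two subsystems
are averaged when both are available, and a single subsystem decides on its own
when the other is missing. This is an assumption for illustration only, not the
fusion method evaluated in the paper; the class labels are invented.

```python
# Illustrative sketch (not the paper's algorithm): opportunistic late fusion of
# per-class probabilities from two gesture-recognition subsystems.

def fuse(emg_probs=None, pressure_probs=None):
    """Return the winning gesture label from whichever inputs are present."""
    available = [p for p in (emg_probs, pressure_probs) if p is not None]
    if not available:
        return None  # no subsystem delivered a result
    classes = available[0].keys()
    fused = {c: sum(p.get(c, 0.0) for p in available) / len(available)
             for c in classes}
    return max(fused, key=fused.get)

emg = {"swipe": 0.6, "tap": 0.3, "none": 0.1}        # wearable EMG subsystem
pressure = {"swipe": 0.2, "tap": 0.7, "none": 0.1}   # steering-wheel sensors
print(fuse(emg, pressure))   # both subsystems available -> fused decision 'tap'
print(fuse(emg, None))       # pressure sensing missing -> EMG alone: 'swipe'
```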
WheelSense: Enabling Tangible Gestures on the Steering Wheel for In-Car
Natural Interaction
In-Vehicle Interaction
/
Angelini, Leonardo
/
Caon, Maurizio
/
Carrino, Francesco
/
Carrino, Stefano
/
Lalanne, Denis
/
Khaled, Omar Abou
/
Mugellini, Elena
HCI International 2013: 15th International Conference on HCI, Part II:
Applications and Services
2013-07-21
v.2
p.531-540
Keywords: Tangible gestures; smart steering wheel; in-vehicle user interface; in-car
natural interaction
© Copyright 2013 Springer-Verlag
Summary: This paper presents WheelSense, a system for non-distracting and
natural interaction with the In-Vehicle Information and Communication System
(IVIS). WheelSense embeds pressure sensors in the steering wheel in order to
detect tangible gestures that the driver can perform on its surface. In this
application, the driver can interact by means of four gestures designed to allow
the execution of secondary tasks without taking the hands off the steering
wheel. The proposed interface thus aims to minimize the driver's distraction
from the primary task. Eight users tested the system in an evaluation composed
of three phases: a gesture recognition test, a gesture recognition test while
driving in a simulated environment, and a usability questionnaire. The results
show a recognition accuracy of 87%, and of 82% while driving. The System
Usability Scale score was 84 points out of 100.
Two Handed Mid-Air Gestural HCI: Point + Command
Gesture and Eye-Gaze Based Interaction
/
Schwaller, Matthias
/
Brunner, Simon
/
Lalanne, Denis
HCI International 2013: 15th International Conference on HCI, Part IV:
Interaction Modalities and Techniques
2013-07-21
v.4
p.388-397
Keywords: Gestural interfaces; Two-hand interaction; User evaluation
© Copyright 2013 Springer-Verlag
Summary: This paper presents work aimed at developing and evaluating two-handed
mid-air gestures for operating a computer accurately and with little effort. The
main idea driving the design of these gestures is that one hand is used for
pointing and the other for four standard commands: selection, drag & drop,
rotation, and zoom. Two gesture vocabularies are compared in a user evaluation.
The paper further presents a novel evaluation methodology and the application
developed to evaluate the four commands, first separately and then together. In
the user evaluation, we found significant differences for the rotation and
zooming gestures: the iconic gesture vocabulary performed better and was rated
higher by users than the technological gesture vocabulary.
A Developer-Oriented Visual Model for Upper-Body Gesture Characterization
Computational Vision in HCI
/
Ruffieux, Simon
/
Lalanne, Denis
/
Khaled, Omar Abou
/
Mugellini, Elena
HCI International 2013: 15th International Conference on HCI, Part V:
Towards Intelligent and Implicit Interaction
2013-07-21
v.5
p.186-195
Keywords: natural interaction; human-computer interaction; multimodality;
visualization tools; developer-oriented
© Copyright 2013 Springer-Verlag
Summary: This paper focuses on a simplified and intuitive representation of
upper-body gestures for developers. The representation is based on the user's
motion parameters, particularly the rotational and translational components of
the body segments during a gesture. The developed static representation aims to
provide a rapid visualization of the complexity of each body segment involved in
the gesture. The model and algorithms used to produce the representation have
been applied to a dataset of 10 representative gestures to illustrate the
approach.
Computer-Supported Work in Partially Distributed and Co-located Teams: The
Influence of Mood Feedback
Human-Work Interaction Design
/
Sonderegger, Andreas
/
Lalanne, Denis
/
Bergholz, Luisa
/
Ringeval, Fabien
/
Sauer, Juergen
Proceedings of IFIP INTERACT'13: Human-Computer Interaction-2
2013
v.2
p.445-460
Keywords: virtual teamwork; videoconference; face-to-face; mood; computer-supported
cooperative work
© Copyright 2013 IFIP
Summary: This article examines the influence of mood feedback on different
outcomes of teamwork in two different collaborative work environments. In a
2 x 2 between-subjects design, mood feedback (present vs. absent) and
communication mode (face-to-face vs. video conferencing) were manipulated
experimentally. We used a newly developed collaborative communication
environment, called EmotiBoard, a large vertical interactive screen with which
team members can interact in a face-to-face discussion or as a spatially
distributed team. To support teamwork, this tool provides visual feedback of
each team member's emotional state. Thirty-five teams of 3 persons each
(including a confederate in each team) completed three different tasks; mood,
performance, subjective workload, and team satisfaction were measured. Results
indicated that the evaluation of the other team members' emotional state was
more accurate when mood feedback was presented. In addition, mood feedback
influenced team performance positively in the video conference condition and
negatively in the face-to-face condition. Furthermore, participants in the video
conference condition were more satisfied after task completion than participants
in the face-to-face condition. The findings indicate that the mood feedback tool
helps teams gain a more accurate understanding of team members' emotional states
in different work situations.
Fusion in multimodal interactive systems: an HMM-based algorithm for
user-induced adaptation
Engineering 1
/
Dumas, Bruno
/
Signer, Beat
/
Lalanne, Denis
ACM SIGCHI 2012 Symposium on Engineering Interactive Computing Systems
2012-06-25
p.15-24
© Copyright 2012 ACM
Summary: Multimodal interfaces have been shown to be ideal candidates for
interactive systems that adapt to a user either automatically or based on
user-defined rules. However, user-based adaptation demands correspondingly
advanced software architectures and algorithms. We present a novel multimodal
fusion algorithm for the development of adaptive interactive systems, based on
hidden Markov models (HMMs). In order to select relevant modalities at the
semantic level, the algorithm draws on temporal relationship properties. The
algorithm has been evaluated in three use cases, from which we were able to
identify the main challenges involved in developing adaptive multimodal
interfaces.
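For readers unfamiliar with HMMs, the following toy sketch shows the general
mechanism such an approach builds on: scoring a sequence of multimodal input
events against a discrete HMM with the forward algorithm. All states, events,
and probabilities are invented for illustration; this is not the fusion
algorithm proposed in the paper.

```python
# Toy sketch (assumptions only): forward-algorithm scoring of an input-event
# sequence under a small discrete HMM.

states = ["point", "speak_command"]
start = {"point": 0.6, "speak_command": 0.4}
trans = {"point": {"point": 0.7, "speak_command": 0.3},
         "speak_command": {"point": 0.4, "speak_command": 0.6}}
emit = {"point": {"touch": 0.8, "voice": 0.2},
        "speak_command": {"touch": 0.1, "voice": 0.9}}

def forward(observations):
    """Probability of an observation sequence under the toy HMM above."""
    alpha = {s: start[s] * emit[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {s: emit[s][obs] * sum(alpha[p] * trans[p][s] for p in states)
                 for s in states}
    return sum(alpha.values())

# Likelihood of a touch event followed by a voice event under this toy model
print(forward(["touch", "voice"]))
```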
A Fitt of distraction: measuring the impact of distracters and multi-users
on pointing efficiency
Works-in-progress
/
Lalanne, Denis
/
Masson, Agnes Lisowska
Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems
2011-05-07
v.2
p.2125-2130
© Copyright 2011 ACM
Summary: This paper presents the results of an experiment aimed at measuring the
impact of the number of distracters and of co-located users on individual
pointing efficiency. The experiment, performed with 20 users, is a variation of
a Fitts' law test in which we incrementally increased the number of distracters
on the screen and the number of co-located users. The results show that the
number of distracters clearly influences users' pointing performance. They
further show that users are more efficient at pointing at items when they share
the display with co-located users than when they are alone.
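As background for the experimental design described above, here is a worked
example of the standard (Shannon) Fitts' law quantities that such a test varies.
The constants a and b below are illustrative placeholders, not values reported
in the paper.

```python
# Worked example of the standard Fitts' law formulation; a and b are example
# regression constants, not results from the paper.

import math

def index_of_difficulty(distance, width):
    """ID in bits: log2(D / W + 1)."""
    return math.log2(distance / width + 1)

def predicted_movement_time(distance, width, a=0.1, b=0.15):
    """Fitts' law MT = a + b * ID (a in seconds, b in seconds per bit)."""
    return a + b * index_of_difficulty(distance, width)

print(index_of_difficulty(512, 32))        # ~4.09 bits
print(predicted_movement_time(512, 32))    # ~0.71 s with the example a, b
```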
PyGmI: creation and evaluation of a portable gestural interface
Short papers
/
Schwaller, Matthias
/
Lalanne, Denis
/
Khaled, Omar Abou
Proceedings of the Sixth Nordic Conference on Human-Computer Interaction
2010-10-16
p.773-776
Keywords: gestural interaction, portable user interface
© Copyright 2010 ACM
Summary: PyGmI, the Portable Gestural Interface we implemented, is a lightweight
tool for interacting with a system via simple hand gestures. The user wears
color markers on the fingers and a webcam on the chest. The implemented
prototype allows viewing and navigating presentation files, thanks to a tiny
projector attached to the user's belt. The gesture recognition uses color
segmentation, tracking, and the Gesture and Activity Recognition Toolkit (GART).
This article presents PyGmI, its setup, the designed gestures, the recognition
modules, an application using it, and finally an evaluation.
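The summary mentions color segmentation and tracking of finger markers. The
sketch below shows one common way such a step could look with OpenCV (HSV
thresholding plus centroid extraction); the HSV range, function name, and camera
index are assumptions, and the GART integration is not shown. It is illustrative
only, not PyGmI's code.

```python
# Illustrative sketch (assumptions, not PyGmI's implementation): segmenting a
# colored finger marker in a webcam frame and finding its centroid for tracking.
# Tested-style OpenCV 4 API; the HSV range below is a made-up green-ish example.

import cv2
import numpy as np

def find_marker(frame_bgr, hsv_low=(40, 80, 80), hsv_high=(80, 255, 255)):
    """Return the (x, y) centroid of the largest blob in the HSV range, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_low), np.array(hsv_high))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

cap = cv2.VideoCapture(0)          # e.g. a chest-worn webcam
ok, frame = cap.read()
if ok:
    print(find_marker(frame))
cap.release()
```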
Managing personal information through faceted navigation
Long Research Papers
/
Evéquoz, Florian
/
Thomet, Julien
/
Lalanne, Denis
Proceedings of the 2010 Conference of the Association Francophone
d'Interaction Homme-Machine
2010-09-20
p.41-48
© Copyright 2010 ACM
Summary: This article introduces Weena, a personal information management (PIM)
system enabling faceted navigation in a personal collection. Information items
can be re-found in particular through related people (social facet) and time
periods (temporal facet), in addition to traditional hierarchical browsing and
text search. Participants in the evaluation used those facets effectively and
expressed interest in the approach. Faceted navigation is therefore a viable and
promising alternative to hierarchical browsing and text search, the two more
traditional re-finding methods in PIM.
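To make the two facets named in the summary concrete, here is a toy sketch of
filtering a personal collection by a related person and a year. The item fields
and function name are invented for illustration; this is not Weena's
implementation.

```python
# Toy sketch (not Weena's code): re-finding items by combining a social facet
# (related person) and a temporal facet (year).

from datetime import date

items = [
    {"title": "budget.xls", "people": {"Julien"}, "date": date(2009, 3, 2)},
    {"title": "trip.jpg",   "people": {"Anna"},   "date": date(2010, 7, 15)},
    {"title": "draft.doc",  "people": {"Julien", "Anna"},
     "date": date(2010, 1, 20)},
]

def facet_filter(collection, person=None, year=None):
    """Keep items matching every facet value that is specified."""
    result = collection
    if person is not None:
        result = [i for i in result if person in i["people"]]
    if year is not None:
        result = [i for i in result if i["date"].year == year]
    return result

print([i["title"] for i in facet_filter(items, person="Julien", year=2010)])
# -> ['draft.doc']
```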
Fusion engines for multimodal input: a survey
Multimodal fusion (special session)
/
Lalanne, Denis
/
Nigay, Laurence
/
Palanque, Philippe
/
Robinson, Peter
/
Vanderdonckt, Jean
/
Ladry, Jean-François
Proceedings of the 2009 International Conference on Multimodal Interfaces
2009-11-02
p.153-160
Keywords: fusion engine, interaction techniques, multimodal interfaces
© Copyright 2009 ACM
Summary: Fusion engines are fundamental components of multimodal interactive
systems: they interpret input streams whose meaning can vary according to
context, task, user, and time. Other surveys have considered multimodal
interactive systems as a whole; we focus more closely on the design,
specification, construction, and evaluation of fusion engines. We first
introduce some terminology and set out the major challenges that fusion engines
aim to solve. A history of past work in the field of fusion engines is then
presented using the BRETAM model, and these approaches to fusion are classified.
The classification considers the types of application, the fusion principles,
and the temporal aspects. Finally, the challenges for future work in the field
of fusion engines are set out; these include software frameworks, quantitative
evaluation, machine learning, and adaptation.
Benchmarking fusion engines of multimodal interactive systems
Multimodal fusion (special session)
/
Dumas, Bruno
/
Ingold, Rolf
/
Lalanne, Denis
Proceedings of the 2009 International Conference on Multimodal Interfaces
2009-11-02
p.169-176
Keywords: fusion engines evaluation, multimodal fusion, multimodal interfaces,
multimodal toolkit
© Copyright 2009 ACM
Summary: This article proposes an evaluation framework to benchmark the
performance of multimodal fusion engines. The paper first introduces different
concepts and techniques associated with multimodal fusion engines and surveys
recent implementations. It then discusses the importance of evaluation as a
means to assess fusion engines, not only from the user perspective but also at
the performance level. The article further proposes a benchmark and a formalism
to build testbeds for assessing multimodal fusion engines. In its last section,
our current fusion engine and the associated system, HephaisTK, are evaluated
using the proposed evaluation framework. The article concludes with a discussion
of the proposed quantitative evaluation, suggestions for building useful
testbeds, and some future improvements.
HephaisTK: a toolkit for rapid prototyping of multimodal interfaces
Demonstration session
/
Dumas, Bruno
/
Lalanne, Denis
/
Ingold, Rolf
Proceedings of the 2009 International Conference on Multimodal Interfaces
2009-11-02
p.231-232
Keywords: human-machine interaction, multimodal interfaces, multimodal toolkit
© Copyright 2009 ACM
Summary: This article introduces HephaisTK, a toolkit for the rapid prototyping
of multimodal interfaces. After briefly discussing the state of the art, the
architecture of the toolkit is presented, along with the major features of
HephaisTK: an agent-based architecture, the ability to easily plug in new input
recognizers, a fusion engine, and configuration by means of a SMUIML XML file.
Finally, applications created with the HephaisTK toolkit are discussed.
Demonstration: HephaisTK, a toolkit for prototyping multimodal interfaces
Demonstrations
/
Dumas, Bruno
/
Lalanne, Denis
/
Ingold, Rolf
Proceedings of the 2008 Conference of the Association Francophone
d'Interaction Homme-Machine
2008-09-02
p.215-216
Keywords: human-machine interaction, multimodal fusion, multimodal interfaces,
multimodal toolkit
© Copyright 2008 ACM
Summary: This article describes HephaisTK, a toolkit for prototyping multimodal
interfaces. The article briefly presents the state of the art and the challenges
before describing the architecture of the HephaisTK toolkit, along with its
description language. Finally, the article outlines future work.
Strengths and weaknesses of software architectures for the rapid creation of
tangible and multimodal interfaces
Making tangible interaction work
/
Dumas, Bruno
/
Lalanne, Denis
/
Guinard, Dominique
/
Koenig, Reto
/
Ingold, Rolf
Proceedings of the 2nd International Conference on Tangible and Embedded
Interaction
2008-02-18
p.47-54
Keywords: multimodal and tangible interfaces, multimodal interaction, software
engineering
© Copyright 2008 ACM
Summary: This paper reviews the challenges associated with the development of
tangible and multimodal interfaces and reports our experience with the
development of three different software architectures for rapidly prototyping
such interfaces. The article first reviews the state of the art and then
compares existing systems with our approaches. Finally, the article highlights
the major issues associated with the development of toolkits for creating
multimodal and tangible interfaces, and presents our future objectives.
FaericWorld: Browsing Multimedia Events Through Static Documents and Links
Web
/
Rigamonti, Maurizio
/
Lalanne, Denis
/
Ingold, Rolf
Proceedings of IFIP INTERACT'07: Human-Computer Interaction
2007-09-10
v.1
p.102-115
Keywords: Multimedia browsing; multimedia indexing; multimodal alignments; information
visualization; information retrieval; multimedia meetings archives
© Copyright 2007 IFIP
Summary: This paper describes a novel browsing paradigm that takes advantage of
the various types of links (e.g., thematic, temporal, references) that can be
automatically built between multimedia documents. This browsing paradigm can
help elicit the hidden structures of multimedia archives or expand search
results to related media. The paper presents a novel model for browsing any kind
of multimedia archive and then focuses on an archive of meeting recordings, in
order to illustrate the advantage of our method for cross-meeting and, more
generally, cross-document browsing. First, the structure of the meeting datasets
is presented, describing in particular the media involved, the annotations used
for cross-document linking, and the major mining techniques integrated in this
work. The paper then gives an overview of the visual browser we developed, which
combines searching and browsing by links. Further, the performance of the
current system is discussed, i.e., the automatic indexing and linking processes
for the two meeting corpora, as well as access and browsing performance.
Finally, the paper presents the major unsolved issues and our perspectives for
future work.
Supporting Human Memory with Interactive Systems
Workshops
/
Lalanne, Denis
/
van den Hoven, Elise
Proceedings of the HCI'07 Conference on People and Computers XXI
2007-09-03
v.2
p.60
Summary: The major goal of this workshop is to explore how interactive systems
can support human memory, using novel technologies and innovative human-machine
interaction paradigms such as tangible interaction. We believe this is important
since memory and attention are becoming critical resources for our well-being,
e.g., in the face of continuously increasing information overload. The goal of
this workshop is to support not only personal information management but also
daily-life activities, e.g., adapted to user preferences and specific contexts.
Whereas current multimedia search engines are designed for large user
communities and their applications, this workshop targets the support of
individuals' personal memory in everyday life.