The Performance and Preference of Different Fingers and Chords for Pointing,
Dragging, and Object Transformation
Fingers and Technology
/
Goguey, Alix
/
Nancel, Mathieu
/
Casiez, Géry
/
Vogel, Daniel
Proceedings of the ACM CHI'16 Conference on Human Factors in Computing
Systems
2016-05-07
v.1
p.4250-4261
© Copyright 2016 ACM
Summary: The development of robust methods to identify which finger is causing each
touch point, called "finger identification," will open up a new input space
where interaction designers can associate system actions with different fingers.
However, relatively little is known about the performance of specific fingers
as single touch points or when used together in a "chord." We present empirical
results for accuracy, throughput, and subjective preference gathered in five
experiments with 48 participants exploring all 10 fingers and 7 two-finger
chords. Based on these results, we develop design guidelines for reasonable
target sizes for specific fingers and two-finger chords, and a relative ranking
of the suitability of fingers and two-finger chords for common multi-touch
tasks. Our work contributes new knowledge regarding specific finger and chord
performance and can inform the design of future interaction techniques and
interfaces utilizing finger identification.
Gunslinger: Subtle Arms-down Mid-air Interaction
Session 1B: Large Displays, Large Movements
/
Liu, Mingyu
/
Nancel, Mathieu
/
Vogel, Daniel
Proceedings of the 2015 ACM Symposium on User Interface Software and
Technology
2015-11-05
v.1
p.63-71
© Copyright 2015 ACM
Summary: We describe Gunslinger, a mid-air interaction technique using barehand
postures and gestures. Unlike past work, we explore a relaxed arms-down
position with both hands interacting at the sides of the body. It features
"hand-cursor" feedback to communicate recognized hand posture, command mode and
tracking quality; and a simple but flexible hand posture recognizer. Although
Gunslinger is suitable for many usage contexts, we focus on integrating mid-air
gestures with large display touch input. We show how the Gunslinger form factor
enables an interaction language that is equivalent, coherent, and compatible
with large display touch input. A four-part study evaluates Midas Touch,
posture recognition feedback, pointing and clicking, and general usability.
Myopoint: Pointing and Clicking Using Forearm Mounted Electromyography and
Inertial Motion Sensors
Mid-Air Gestures and Interaction
/
Haque, Faizan
/
Nancel, Mathieu
/
Vogel, Daniel
Proceedings of the ACM CHI'15 Conference on Human Factors in Computing
Systems
2015-04-18
v.1
p.3653-3656
© Copyright 2015 ACM
Summary: We describe a mid-air, barehand pointing and clicking interaction technique
using electromyographic (EMG) and inertial measurement unit (IMU) input from a
consumer armband device. The technique uses enhanced pointer feedback to convey
state, a custom pointer acceleration function tuned for angular inertial
motion, and correction and filtering techniques to minimize side-effects when
combining EMG and IMU input. By replicating a previous large display study
using a motion capture pointing technique, we show the EMG and IMU technique is
only 430 to 790 ms slower and has acceptable error rates for targets greater
than 48 mm. Our work demonstrates that consumer-level EMG and IMU sensing is
practical for distant pointing and clicking on large displays.
Clutching Is Not (Necessarily) the Enemy
Interacting with GUIs
/
Nancel, Mathieu
/
Vogel, Daniel
/
Lank, Edward
Proceedings of the ACM CHI'15 Conference on Human Factors in Computing
Systems
2015-04-18
v.1
p.4199-4202
© Copyright 2015 ACM
Summary: Clutching is usually assumed to be triggered by a lack of physical space and
detrimental to pointing performance. We conducted a controlled experiment using
a laptop trackpad in which the effect of clutching on pointing performance was
dissociated from the effects of control-to-display transfer functions.
Participants performed a series of target acquisition tasks using typical
cursor acceleration functions with and without clutching. All pointing tasks
were feasible without clutching, but clutch-less movements were harder to
perform, caused more errors, required more preparation time, and were not
faster than clutch-enabled movements.
Causality: a conceptual model of interaction history
User models and prediction
/
Nancel, Mathieu
/
Cockburn, Andy
Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems
2014-04-26
v.1
p.1777-1786
© Copyright 2014 ACM
Summary: Simple history systems such as Undo and Redo permit retrieval of earlier or
later interaction states, but advanced systems allow powerful capabilities to
reuse or reapply combinations of commands, states, or data across interaction
contexts. Whether simple or powerful, designing interaction history mechanisms
is challenging. We begin by reviewing existing history systems and models,
observing a lack of tools to assist designers and researchers in specifying,
contemplating, combining, and communicating the behaviour of history systems.
To resolve this problem, we present CAUSALITY, a conceptual model of
interaction history that clarifies the possibilities for temporal interactions.
The model includes components for the work artifact (such as the text and
formatting of a Word document), the system context (such as the settings and
parameters of the user interface), the linear timeline (the commands executed
in real time), and the branching chronology (a structure of executed commands
and their impact on the artifact and/or context, which may be navigable by the
user). We then describe and exemplify how this model can be used to encapsulate
existing user interfaces and reveal limitations in their behaviour, and we also
show in a conceptual evaluation how the model stimulates the design of new and
innovative opportunities for interacting in time.
High-precision pointing on large wall displays using small handheld devices
Papers: large and public displays
/
Nancel, Mathieu
/
Chapuis, Olivier
/
Pietriga, Emmanuel
/
Yang, Xing-Dong
/
Irani, Pourang P.
/
Beaudouin-Lafon, Michel
Proceedings of ACM CHI 2013 Conference on Human Factors in Computing Systems
2013-04-27
v.1
p.831-840
© Copyright 2013 ACM
Summary: Rich interaction with high-resolution wall displays is not limited to
remotely pointing at targets. Other relevant types of interaction include
virtual navigation, text entry, and direct manipulation of control widgets.
However, most techniques for remotely acquiring targets with high precision
have studied remote pointing in isolation, focusing on pointing efficiency and
ignoring the need to support these other types of interaction. We investigate
high-precision pointing techniques capable of acquiring targets as small as 4
millimeters on a 5.5-meter-wide display while leaving up to 93% of a typical
tablet device's screen space available for task-specific widgets. We compare
these techniques to state-of-the-art distant pointing techniques and show that
two of our techniques, a purely relative one and one that uses head
orientation, perform as well or better than the best pointing-only input
techniques while using a fraction of the interaction resources.
Body-centric design space for multi-surface interaction
Papers: full-body interaction
/
Wagner, Julie
/
Nancel, Mathieu
/
Gustafson, Sean G.
/
Huot, Stephane
/
Mackay, Wendy E.
Proceedings of ACM CHI 2013 Conference on Human Factors in Computing Systems
2013-04-27
v.1
p.1299-1308
© Copyright 2013 ACM
Summary: We introduce BodyScape, a body-centric design space that allows us to
describe, classify and systematically compare multi-surface interaction
techniques, both individually and in combination. BodyScape reflects the
relationship between users and their environment, specifically how different
body parts enhance or restrict movement within particular interaction
techniques and can be used to analyze existing techniques or suggest new ones.
We illustrate the use of BodyScape by comparing two free-hand techniques,
on-body touch and mid-air pointing, first separately, then combined. We found
that touching the torso is faster than touching the lower legs, since reaching
the legs disrupts the user's balance; and touching targets on the dominant arm is slower
than targets on the torso because the user must compensate for the applied
force.
Rapid development of user interfaces on cluster-driven wall displays with
jBricks
Interaction with large screens
/
Pietriga, Emmanuel
/
Huot, Stéphane
/
Nancel, Mathieu
/
Primet, Romain
ACM SIGCHI 2011 Symposium on Engineering Interactive Computing Systems
2011-06-13
p.185-190
© Copyright 2011 ACM
Summary: Research on cluster-driven wall displays has mostly focused on techniques
for parallel rendering of complex 3D models. There has been comparatively
little research effort dedicated to other types of graphics and to the software
engineering issues that arise when prototyping novel interaction techniques or
developing full-featured applications for such displays. We present jBricks, a
Java toolkit that integrates a high-quality 2D graphics rendering engine and a
versatile input configuration module into a coherent framework, enabling the
exploratory prototyping of interaction techniques and rapid development of
post-WIMP applications running on cluster-driven interactive visualization
platforms.
Mid-air pan-and-zoom on wall-sized displays
Mid-air pointing & gestures
/
Nancel, Mathieu
/
Wagner, Julie
/
Pietriga, Emmanuel
/
Chapuis, Olivier
/
Mackay, Wendy
Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems
2011-05-07
v.1
p.177-186
© Copyright 2011 ACM
Summary: Very-high-resolution wall-sized displays offer new opportunities for
interacting with large data sets. While pointing on this type of display has
been studied extensively, higher-level, more complex tasks such as pan-zoom
navigation have received little attention. It thus remains unclear which
techniques are best suited to perform multiscale navigation in these
environments. Building upon empirical data gathered from studies of
pan-and-zoom on desktop computers and studies of remote pointing, we identified
three key factors for the design of mid-air pan-and-zoom techniques: uni- vs.
bimanual interaction, linear vs. circular movements, and level of guidance to
accomplish the gestures in mid-air. After an extensive phase of iterative
design and pilot testing, we ran a controlled experiment aimed at better
understanding the influence of these factors on task performance. Significant
effects were obtained for all three factors: bimanual interaction, linear
gestures and a high level of guidance resulted in significantly improved
performance. Moreover, the interaction effects among some of the dimensions
suggest possible combinations for more complex, real-world tasks.
Un espace de conception fondé sur une analyse morphologique des
techniques de menus [A design space based on a morphological analysis of menu techniques]
Classons pour comprendre, comparer, trouver [Classifying to understand, compare, and find]
/
Nancel, Mathieu
/
Huot, Stéphane
/
Beaudouin-Lafon, Michel
Proceedings of the 2009 Conference of the Association Francophone
d'Interaction Homme-Machine
2009-10-13
p.13-22
Keywords: design space, menus, morphological analysis
© Copyright 2009 ACM
Summary: This paper presents a design space based on a morphological analysis of menu
techniques. The goal of this design space is to facilitate the exploration of
novel menu designs, in particular to increase menu capacity without sacrificing
performance. The paper demonstrates the generative aspect of this design space
with four new menu designs based on poorly explored combinations of input
dimensions. For two of these four designs, the paper presents controlled
experiments that show that they perform on a par with other menus from the
literature.