Force-enabled TouchPad in Cars: Improving Target Selection using Absolute
Input
Late-Breaking Works: Novel Interactions
/
Sheik-Nainar, Mohamed
/
Huber, Jochen
/
Bose, Raja
/
Matic, Nada
Extended Abstracts of the ACM CHI'16 Conference on Human Factors in
Computing Systems
2016-05-07
v.2
p.2697-2704
© Copyright 2016 ACM
Summary: Current automotive interfaces rely heavily on touchscreens, which
leverage simple and intuitive direct touch interaction. Since input and output
are co-located, displays have to be positioned within arm's reach. When the
display is outside the reach envelope, a touchpad has been used as a control
device. Current implementations of touchpads in cars rely on a relative input
method that requires a visual cursor and is known to cause distraction from the
primary driving task. Newer touchpads with force-sensing capability are being
introduced in notebook computers. We propose using a force-enabled touchpad
with absolute mapping for target selection. We performed a controlled
experiment as a first step towards assessing whether target selection
performance with absolute-mapped force input is comparable to direct touch
input. Results show that target selection performance is not significantly
different from direct touch input, making a case for force-enabled touchpads in
scenarios where the display is outside the reach envelope.
FingerReader: A Wearable Device to Explore Printed Text on the Go
Accessibility at Home & on The Go
/
Shilkrot, Roy
/
Huber, Jochen
/
Ee, Wong Meng
/
Maes, Pattie
/
Nanayakkara, Suranga Chandima
Proceedings of the ACM CHI'15 Conference on Human Factors in Computing
Systems
2015-04-18
v.1
p.2363-2372
© Copyright 2015 ACM
Summary: Accessing printed text in a mobile context is a major challenge for the
blind. A preliminary study with blind people reveals numerous difficulties with
existing state-of-the-art technologies including problems with alignment,
focus, accuracy, mobility and efficiency. In this paper, we present a
finger-worn device, FingerReader, that assists blind users with reading printed
text on the go. We introduce a novel computer vision algorithm for
local-sequential text scanning that enables reading single lines, blocks of
text or skimming the text with complementary, multimodal feedback. This system
is implemented in a small finger-worn form factor that enables a more
manageable eyes-free operation with trivial setup. We offer findings from three
studies performed to determine the usability of the FingerReader.
SmartObjects: Fourth Workshop on Interacting with Smart Objects
Workshops
/
Schnelle-Walka, Dirk
/
Mühlhäuser, Max
/
Radomski, Stefan
/
Brdiczka, Oliver
/
Huber, Jochen
/
Luyten, Kris
/
Grosse-Puppendahl, Tobias
Proceedings of the 2015 International Conference on Intelligent User
Interfaces
2015-03-29
v.1
p.453-454
© Copyright 2015 ACM
Summary: The increasing number of smart objects in our everyday life shapes how we
interact beyond the desktop. In this workshop we discussed how the interaction
with these smart objects should be designed from various perspectives. This
year's workshop put a special focus on affective computing with smart objects,
as reflected by the keynote talk.
Body as display: augmenting the face through transillumination
Posters & Demonstrations
/
Wessolek, Daniel
/
Huber, Jochen
/
Maes, Pattie
Proceedings of the 2015 Augmented Human International Conference
2015-03-09
p.193-194
© Copyright 2015 ACM
Summary: In this paper we describe our explorations of the design space offered by
augmenting parts of the human face, in this case, the ears. Using
light-emitting add-ons behind the ears we aim to enhance social interactions.
Scenarios range from indirect notifications of events, to messaging directed to
the wearer but conveyed face to face by another person, to adding information
regarding the internal state of the wearer, like loudness discomfort levels,
concentration fatigue, or emotional strain levels.
Feel & see the globe: a thermal, interactive installation
Posters & Demonstrations
/
Huber, Jochen
/
Malavipathirana, Hasantha
/
Wang, Yikun
/
Li, Xinyu
/
Fu, Jody C.
/
Maes, Pattie
/
Nanayakkara, Suranga
Proceedings of the 2015 Augmented Human International Conference
2015-03-09
p.215-216
© Copyright 2015 ACM
Summary: "Feel & See the Globe" is a thermal, interactive installation. The
central idea is to map temperature information in regions around the world from
prehistoric through modern to futuristic times onto a low-fidelity display. The
display visually communicates global temperature rates and lets visitors
experience the temperature physically through a tangible, thermal artifact. A
pertinent educational aim is to inform and teach about global warming.
Towards effective interaction with omnidirectional videos using immersive
virtual reality headsets
Posters & Demonstrations
/
Petry, Benjamin
/
Huber, Jochen
Proceedings of the 2015 Augmented Human International Conference
2015-03-09
p.217-218
© Copyright 2015 ACM
Summary: Omnidirectional videos (ODV), also known as panoramic videos, are an
emerging kind of media. ODVs are typically recorded with cameras that
cover up to 360° of the recorded scene. Due to the limitations of human
vision, ODVs cannot be viewed as-is. There is a large body of work that
focuses on browsing ODVs on ordinary 2D displays, e.g. on an LCD using a
desktop computer or on a smartphone. In this demonstration paper, we present a
new approach for ODV browsing using an immersive, head-mounted system. The
novelty of our implementation lies in decoupling navigation in time from
navigation in space: navigation in time is mapped to gesture-based interactions
and navigation in space is mapped to head movements. We argue that this enables
more natural ways of interacting with ODVs.
SparKubes: exploring the interplay between digital and physical spaces with
minimalistic interfaces
Physical -- virtual
/
Ortega-Avila, Santiago
/
Huber, Jochen
/
Janaka, Nuwan
/
Withana, Anusha
/
Fernando, Piyum
/
Nanayakkara, Suranga
Proceedings of the 2014 Australian Computer-Human Interaction Conference
2014-12-02
p.204-207
© Copyright 2014 ACM
Summary: Tangible objects have seen an ongoing integration into real-world settings,
e.g. in the classroom. These objects allow learners, for instance, to explore
digital content in the physical space and leverage the physicality of the
interface for spatial interaction. In this paper, we present SparKubes, a set
of stand-alone tangible objects that are coded with simple behaviors and do
not require additional instrumentation or setup. This overcomes a variety of
issues such as setting up network connections and instrumenting the
environment -- as long as a SparKube sees another, it "works". The contribution
of this paper is three-fold: we (1) present the implementation of a
minimalistic tangible platform as the basis for SparKube, (2) depict the design
space that covers a variety of interaction primitives and (3) show how these
primitives can be combined to create and manipulate SparKube interfaces in the
scope of two salient application scenarios: tangible widgets and the
manipulation of information flow.
EarPut: augmenting ear-worn devices for ear-based interaction
User experience
/
Lissermann, Roman
/
Huber, Jochen
/
Hadjakos, Aristotelis
/
Nanayakkara, Suranga
/
Mühlhäuser, Max
Proceedings of the 2014 Australian Computer-Human Interaction Conference
2014-12-02
p.300-307
© Copyright 2014 ACM
Summary: One of the pervasive challenges in mobile interaction is decreasing the
visual demand of interfaces towards eyes-free interaction. In this paper, we
focus on the unique affordances of the human ear to support one-handed and
eyes-free mobile interaction. We present EarPut, a novel interface concept and
hardware prototype, which unobtrusively augments a variety of accessories that
are worn behind the ear (e.g. headsets or glasses) to instrument the human ear
as an interactive surface. The contribution of this paper is three-fold. We
contribute (i) results from a controlled experiment with 27 participants,
providing empirical evidence that people are able to target salient regions on
their ear effectively and precisely, (ii) a first, systematically derived
design space for ear-based interaction and (iii) a set of proof-of-concept
EarPut applications that leverage the design space and embrace mobile media
navigation, mobile gaming and smart home interaction.
LODE: Linking digital humanities content to the web of data
Other
/
Sztyler, Timo
/
Huber, Jakob
/
Noessner, Jan
/
Murdock, Jaimie
/
Allen, Colin
/
Niepert, Mathias
JCDL'14: Proceedings of the 2014 ACM/IEEE-CS Joint Conference on Digital
Libraries
2014-09-08
p.423-424
Keywords: Browsers
Keywords: Cognitive science
Keywords: Educational institutions
Keywords: Joining processes
Keywords: Resource description framework
Keywords: Search engines
Keywords: Standards
© Copyright 2014 IEEE
Summary: Numerous digital library projects maintain their data collections in the
form of text, images, and metadata. While data may be stored in many formats,
from plain text to XML to relational databases, the use of the resource
description framework (RDF) as a standardized representation has gained
considerable traction during the last five years. Almost every digital
humanities meeting has at least one session concerned with the topic of digital
humanities, RDF, and linked data, including JCDL. While most existing work in
linked data has focused on improving algorithms for entity matching, the aim of
our Linked Open Data Enhancer Lode is to work "out of the box", enabling its
use by humanities scholars, computer scientists, librarians, and information
scientists alike. With Lode we enable non-technical users to enrich a local RDF
repository with high-quality data from the Linked Open Data cloud. Lode links
and enhances the local RDF repository without reducing the quality of the data.
In particular, we support the user in the enhancement and linking process by
providing intuitive user interfaces and by suggesting high-quality linking
candidates using state-of-the-art matching algorithms. We hope that the Lode
framework will be useful to digital humanities scholars, complementing other
digital humanities tools.
PalmRC: leveraging the palm surface as an imaginary eyes-free television
remote control
/
Dezfuli, Niloofar
/
Khalilbeigi, Mohammadreza
/
Huber, Jochen
/
Özkorkmaz, Murat
/
Mühlhäuser, Max
Behaviour and Information Technology
2014-08-03
v.33
n.8
p.829-843
© Copyright 2014 Taylor and Francis
Summary: User input on television (TV) typically requires a mediator device such as a
handheld remote control. While this is a well-established interaction paradigm,
a handheld device has serious drawbacks: it can be easily misplaced due to its
mobility and, in the case of a touch-screen interface, it also requires additional
visual attention. Emerging interaction paradigms such as 3D mid-air gestures
using novel depth sensors (e.g. Microsoft Kinect), aim at overcoming these
limitations, but are known to be tiring. In this article, we propose to
leverage the palm as an interactive surface for TV remote control. We present
three user studies which set the basis for our four contributions: We (1)
qualitatively explore the conceptual design space of the proposed imaginary
palm-based remote control in an explorative study, (2) quantitatively
investigate the effectiveness and accuracy of such an interface in a controlled
experiment, (3) assess user acceptance in a controlled laboratory evaluation
comparing the PalmRC concept with the two most typical existing input
modalities, a conventional remote control and a touch-based remote control
interface on a smartphone, in terms of user experience, task load and
overall preference, and (4) contribute PalmRC, an eyes-free, palm-surface-based
TV remote control. Our results show that the palm has the potential to be
leveraged for device-less eyes-free TV remote interaction without any
third-party mediator device.
Permulin: mixed-focus collaboration on multi-view tabletops
Head-worn displays
/
Lissermann, Roman
/
Huber, Jochen
/
Schmitz, Martin
/
Steimle, Jürgen
/
Mühlhäuser, Max
Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems
2014-04-26
v.1
p.3191-3200
© Copyright 2014 ACM
Summary: We contribute Permulin, an integrated set of interaction and visualization
techniques for multi-view tabletops to support co-located collaboration across
a wide variety of collaborative coupling styles. These techniques (1) provide
support both for group work and for individual work, as well as for the
transitions in-between, (2) contribute sharing and peeking techniques to
support mutual awareness and group coordination during phases of individual
work, (3) reduce interference during group work on a group view, and (4)
directly integrate with conventional multi-touch input. We illustrate our
techniques in a proof-of-concept implementation with the two example
applications of map navigation and photo collages. Results from two user
studies demonstrate that Permulin supports fluent transitions between
individual and group work and exhibits unique awareness properties that allow
participants to be highly aware of each other during tightly coupled
collaboration, while being able to unobtrusively perform individual work during
loosely coupled collaboration.
Workshop on assistive augmentation
Workshop summaries
/
Huber, Jochen
/
Rekimoto, Jun
/
Inami, Masahiko
/
Shilkrot, Roy
/
Maes, Pattie
/
Ee, Wong Meng
/
Pullin, Graham
/
Nanayakkara, Suranga Chandima
Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems
2014-04-26
v.2
p.103-106
© Copyright 2014 ACM
Summary: Our senses are the dominant channel for perceiving the world around us, some
more central than others, such as the sense of vision. Whether they have
impairments or not, people often find themselves at the edge of sensorial
capability and seek assistive or enhancing devices. We wish to put sensorial
ability and disability on a continuum of usability for certain technology,
rather than treat one or the other extreme as the focus.
The overarching topic of the workshop proposed here is the design and
development of assistive technology, user interfaces and interactions that
seamlessly integrate with a user's mind, body and behavior, providing an
enhanced perception. We call this "Assistive Augmentation".
The workshop aims to establish conversation and idea exchange with
researchers and practitioners at the junction of human-computer interfaces,
assistive technology and human augmentation. The workshop will serve as a hub
for the emerging community of assistive augmentation researchers.
A wearable text-reading device for the visually-impaired
Video showcase presentations
/
Shilkrot, Roy
/
Huber, Jochen
/
Liu, Connie
/
Maes, Pattie
/
Nanayakkara, Suranga Chandima
Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems
2014-04-26
v.2
p.193-194
© Copyright 2014 ACM
Summary: Visually impaired people report numerous difficulties with accessing printed
text using existing technology, including problems with alignment, focus,
accuracy, mobility and efficiency. We present a finger-worn device, which
contains a camera, vibration motors and a microcontroller, that assists the
visually impaired with effectively and efficiently reading paper-printed text
in a manageable operation with little setup. We introduce a novel,
local-sequential manner for scanning text which enables reading single lines,
blocks of text or skimming the text for important sections while providing
real-time auditory and tactile feedback.
FingerReader: a wearable device to support text reading on the go
Works-in-progress
/
Shilkrot, Roy
/
Huber, Jochen
/
Liu, Connie
/
Maes, Pattie
/
Nanayakkara, Suranga Chandima
Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems
2014-04-26
v.2
p.2359-2364
© Copyright 2014 ACM
Summary: Visually impaired people report numerous difficulties with accessing printed
text using existing technology, including problems with alignment, focus,
accuracy, mobility and efficiency. We present a finger-worn device that assists
the visually impaired with effectively and efficiently reading paper-printed
text. We introduce a novel, local-sequential manner for scanning text which
enables reading single lines, blocks of text or skimming the text for important
sections while providing real-time auditory and tactile feedback. The design is
motivated by preliminary studies with visually impaired people, and it is
small-scale and mobile, which enables a more manageable operation with little
setup.
SpiderVision: extending the human field of view for augmented awareness
8. Super Perception
/
Fan, Kevin
/
Huber, Jochen
/
Nanayakkara, Suranga
/
Inami, Masahiko
Proceedings of the 2014 Augmented Human International Conference
2014-03-07
p.47
© Copyright 2014 ACM
Summary: We present SpiderVision, a wearable device that extends the human field of
view to augment a user's awareness of things happening behind their back.
SpiderVision leverages a front and back camera to enable users to focus on the
front view while employing intelligent interface techniques to cue the user
about activity in the back view. The extended back view is only blended in when
the scene captured by the back camera is analyzed to be dynamically changing,
e.g. due to object movement. We explore factors that affect the blended
extension, such as view abstraction and blending area. We contribute results of
a user study that explores 1) whether users can perceive the extended field of
view effectively, and 2) whether the extended field of view is considered a
distraction. Quantitative analysis of the users' performance and qualitative
observations of how users perceive the visual augmentation are described.
SmartObjects: third workshop on interacting with smart objects
Workshop summaries
/
Schnelle-Walka, Dirk
/
Huber, Jochen
/
Radomski, Stefan
/
Brdiczka, Oliver
/
Luyten, Kris
/
Mühlhäuser, Max
Companion Proceedings of the 2014 International Conference on Intelligent
User Interfaces
2014-02-24
v.2
p.45-46
© Copyright 2014 ACM
Summary: The increasing number of smart objects in our everyday life shapes how we
interact beyond the desktop. In this workshop we discuss how interaction with
these smart objects should be designed from various perspectives.
CoStream: co-construction of shared experiences through mobile live video
sharing
Innovative interaction
/
Dezfuli, Niloofar
/
Huber, Jochen
/
Churchill, Elizabeth F.
/
Mühlhäuser, Max
Proceedings of the 27th BCS International Conference on Human-Computer
Interaction
2013-09-09
p.6
© Copyright 2013 Authors
Summary: Mobile media sharing is an increasingly popular form of social media
interaction. Research has shown that asynchronous sharing fosters and maintains
social connections and serves as a memory aid. More recently, researchers have
investigated the potential for mobile media sharing as a mechanism for
providing additional event-related information to spectators in a stadium. In
this paper, we describe CoStream, a novel system for mobile live sharing of
user-generated video in-situ during events. Developed iteratively with users,
CoStream goes beyond prior work by providing a strong real-time coupling to the
event, leveraging users' social connections to provide multiple perspectives on
the ongoing action. Field trials demonstrate that real time sharing of
different perspectives on the same event has the potential to provide
fundamentally new experiences of same-place events, such as concerts or stadium
sports. We discuss how CoStream enriches social interactions, increases
contextual, social and spatial awareness, and thus encourages active
spectatorship. We further contribute key requirements for the design of future
interfaces supporting the co-construction of shared experiences during events,
in-situ.
EarPut: augmenting behind-the-ear devices for ear-based interaction
Inputs
/
Lissermann, Roman
/
Huber, Jochen
/
Hadjakos, Aristotelis
/
Mühlhäuser, Max
Extended Abstracts of ACM CHI'13 Conference on Human Factors in Computing
Systems
2013-04-27
v.2
p.1323-1328
© Copyright 2013 ACM
Summary: In this work-in-progress paper, we make a case for leveraging the unique
affordances of the human ear for eyes-free, mobile interaction. We present
EarPut, a novel interface concept that instruments the ear as an interactive
surface for touch-based interactions, together with its prototypical hardware
implementation. The central idea behind EarPut is to go beyond prior work by
unobtrusively augmenting a variety of accessories that are worn behind the ear,
such as headsets or glasses. Results from a controlled experiment with 27
participants provide empirical evidence that people are able to target salient
regions on their ear effectively and precisely. Moreover, we contribute a
first, systematically derived interaction design space for ear-based
interaction and a set of exemplary applications.
Permulin: collaboration on interactive surfaces with personal in- and output
Tabletops and displays
/
Lissermann, Roman
/
Huber, Jochen
/
Steimle, Jürgen
/
Mühlhäuser, Max
Extended Abstracts of ACM CHI'13 Conference on Human Factors in Computing
Systems
2013-04-27
v.2
p.1533-1538
© Copyright 2013 ACM
Summary: Interactive tables are well suited for co-located collaboration. Most prior
research assumed that users share the same overall display output; a key
challenge was the appropriate partitioning of screen real estate, assembling
the right information 'at the users' finger-tips through simultaneous input. A
different approach is followed in recent multi-view display environments: they
offer personal output for each team member, yet risk dissolving the team due
to the lack of a common visual focus. Our approach combines both lines of
thought, guided by the question: "What if the visible output and simultaneous
input was partly shared and partly private?" We present Permulin as a concrete
corresponding implementation, based on a set of novel interaction concepts that
support fluid transitions between individual and group activities, coordination
of group activities, and concurrent, distraction-free in-place manipulation.
Study results indicate that users are able to focus on individual work on the
whole surface without notable mutual interference, while at the same time
establishing a strong sense of collaboration.
Permulin: personal in- and output on interactive surfaces
Interactivity: research
/
Lissermann, Roman
/
Huber, Jochen
/
Steimle, Jürgen
/
Mühlhäuser, Max
Extended Abstracts of ACM CHI'13 Conference on Human Factors in Computing
Systems
2013-04-27
v.2
p.3083-3086
© Copyright 2013 ACM
Summary: Interactive tables are well suited for co-located collaboration. Most prior
research assumed that users share the same overall display output; a key
challenge was the appropriate partitioning of screen real estate, assembling
the right information "at the users' finger-tips" through simultaneous input. A
different approach is followed in recent multi-view display environments: they
offer personal output for each team member, yet risk dissolving the team due
to the lack of a common visual focus. Our approach combines both lines of
thought, guided by the question: "What if the visible output and simultaneous
input was partly shared and partly private?" We present Permulin as a concrete
corresponding implementation, based on a set of novel interaction concepts that
support fluid transitions between individual and group activities, as well as
coordination of group activities.
SmartObjects: second workshop on interacting with smart objects
Workshops
/
Schnelle-Walka, Dirk
/
Huber, Jochen
/
Lissermann, Roman
/
Brdiczka, Oliver
/
Luyten, Kris
/
Mühlhäuser, Max
Proceedings of the 2013 International Conference on Intelligent User
Interfaces
2013-03-19
v.2
p.113-114
© Copyright 2013 ACM
Summary: Smart objects are everyday objects that have computing capabilities and give
rise to new ways of interacting with our environment. The increasing number of
smart objects in our life shapes how we interact beyond the desktop. In this
workshop we explore various aspects of the design, development and deployment
of smart objects including how one can interact with smart objects.
LightBeam: interacting with augmented real-world objects in pico projections
Mobile augmented reality and mobile video
/
Huber, Jochen
/
Steimle, Jürgen
/
Liao, Chunyuan
/
Liu, Qiong
/
Mühlhäuser, Max
Proceedings of the 2012 International Conference on Mobile and Ubiquitous
Multimedia
2012-12-04
p.16
© Copyright 2012 ACM
Summary: Pico projectors have lately been investigated as mobile display and
interaction devices. We propose to use them as 'light beams': Everyday objects
sojourning in a beam are turned into dedicated projection surfaces and tangible
interaction devices. This way, our daily surroundings get populated with
interactive objects, each one temporarily chartered with a dedicated sub-issue
of pervasive interaction. While interaction with objects has been studied in
larger, immersive projection spaces, the affordances of pico projections are
fundamentally different: they have a very small, strictly limited field of
projection, and they are mobile. This paper contributes the results of an
exploratory field study on how people interact with everyday objects in pico
projections in nomadic settings. Based upon these results, we present novel
interaction techniques that leverage the limited field of projection and
trade-off between digitally augmented and traditional uses of everyday objects.
Toward a theory of interaction in mobile paper-digital ensembles
Beyond paper
/
Heinrichs, Felix
/
Schreiber, Daniel
/
Huber, Jochen
/
Mühlhäuser, Max
Proceedings of ACM CHI 2012 Conference on Human Factors in Computing Systems
2012-05-05
v.1
p.1897-1900
© Copyright 2012 ACM
Summary: Although smartphones and tablets are becoming increasingly popular, pen and
paper continue to play an important role in mobile practices, such as note taking or
creative discussions. Applications designed to combine the benefits of both
worlds in a mobile paper-digital ensemble require a theoretical understanding
of interaction, to inform the design of adequate interaction techniques. To
fill this void, we propose a theory based on the results of a stimulus-driven
exploratory study.
CoStream: in-situ co-construction of shared experiences through mobile video
sharing during live events
Work-in-progress
/
Dezfuli, Niloofar
/
Huber, Jochen
/
Olberding, Simon
/
Mühlhäuser, Max
Extended Abstracts of ACM CHI'12 Conference on Human Factors in Computing
Systems
2012-05-05
v.2
p.2477-2482
© Copyright 2012 ACM
Summary: Mobile live video broadcasting has become increasingly popular as a means for
novel social media interaction. Recent research has mainly focused on bridging
larger physical distances in large-scale events such as car racing, where
participants are unable to spectate from a certain location in the event. In
this paper, we advocate using live video streams not only over larger
distances, but also in-situ in closed events such as soccer matches or
concerts. We present CoStream, a mobile live video sharing system and present
its iterative design process. We used CoStream as an instrument in a field
study to investigate the in-situ co-construction of shared experiences during
live events. We contribute our findings and outline future work.
Leveraging the palm surface as an eyes-free tv remote control
Work-in-progress
/
Dezfuli, Niloofar
/
Khalilbeigi, Mohammadreza
/
Huber, Jochen
/
Müller, Florian
/
Mühlhäuser, Max
Extended Abstracts of ACM CHI'12 Conference on Human Factors in Computing
Systems
2012-05-05
v.2
p.2483-2488
© Copyright 2012 ACM
Summary: User input on television typically requires a mediator device such as a
handheld remote control. While this is a well-established interaction paradigm,
a handheld device has serious drawbacks: it can be easily misplaced due to its
mobility and, in the case of a touch-screen interface, it also requires additional
visual attention. Emerging interaction paradigms like 3D mid-air gestures using
novel depth sensors such as Microsoft's Kinect aim at overcoming these
limitations, but are known to be tiring, for instance. In this paper, we propose
to leverage the palm as an interactive surface for TV remote control. Our
contribution is two-fold: (1) we have explored the conceptual design space in
an exploratory study. (2) Based upon these results, we investigated the
accuracy and effectiveness of such an interface in a controlled experiment. Our
results show that the palm has the potential to be leveraged for device-less
and eyes-free TV interactions without any third-party mediator device.