FreeTop: Finding Free Spots for Projective Augmentation
Late-Breaking Works: Engineering of Interactive Systems
/
Riemann, Jan
/
Khalilbeigi, Mohammadreza
/
Schmitz, Martin
/
Doeweling, Sebastian
/
Müller, Florian
/
Mühlhäuser, Max
Extended Abstracts of the ACM CHI'16 Conference on Human Factors in
Computing Systems
2016-05-07
v.2
p.1598-1606
© Copyright 2016 ACM
Summary: Augmenting the physical world using projection technologies or head-worn
displays is becoming increasingly popular in research and commercial applications.
However, a common problem is interference between the physical surface's
texture and the projection. In this paper, we present FreeTop, a combined
approach to finding areas suitable for projection, which considers multiple
aspects influencing projection quality, like visual texture and physical
surface structure. FreeTop can be used in stationary and mobile settings for
locating free areas in arbitrary physical settings suitable for projective
augmentation and touch interaction.
ProxiWatch: Enhancing Smartwatch Interaction through Proximity-based Hand
Input
Late-Breaking Works: Novel Interactions
/
Müller, Florian
/
Günther, Sebastian
/
Dezfuli, Niloofar
/
Khalilbeigi, Mohammadreza
/
Mühlhäuser, Max
Extended Abstracts of the ACM CHI'16 Conference on Human Factors in
Computing Systems
2016-05-07
v.2
p.2617-2624
© Copyright 2016 ACM
Summary: Smartwatches allow ubiquitous and mobile interaction with digital content.
Because of the small screen sizes, traditional interaction techniques are often
not applicable. In this work, we show how the degree of freedom offered by the
elbow joint, i.e., flexion and extension, can be leveraged as an additional
one-handed input modality for smartwatches. By moving the watch towards or away
from the body, the user is able to provide input to the smartwatch without a
second hand. We present the results of a controlled experiment focusing on the
human capabilities for proximity-based interaction. Based on the results, we
propose guidelines for designing proximity-based smartwatch interfaces and
present ProxiWatch: a one-handed and proximity-based input modality for
smartwatches alongside a prototypical implementation.
Liquido: Embedding Liquids into 3D Printed Objects to Sense Tilting and
Motion
Late-Breaking Works: Novel Interactions
/
Schmitz, Martin
/
Leister, Andreas
/
Dezfuli, Niloofar
/
Riemann, Jan
/
Müller, Florian
/
Mühlhäuser, Max
Extended Abstracts of the ACM CHI'16 Conference on Human Factors in
Computing Systems
2016-05-07
v.2
p.2688-2696
© Copyright 2016 ACM
Summary: Tilting and motion are widely used as interaction modalities in smart
objects such as wearables and smartphones (e.g., to detect posture or
shaking). They are often sensed with accelerometers. In this paper, we propose
to embed liquids into 3D printed objects while printing to sense various
tilting and motion interactions via capacitive sensing. This method reduces the
assembly effort after printing and is a low-cost and easy-to-apply way of
extending the input capabilities of 3D printed objects. We contribute two
liquid sensing patterns and a practical printing process using a standard
dual-extrusion 3D printer and commercially available materials. We validate the
method by a series of evaluations and provide a set of interactive example
applications.
What Belongs Together Comes Together: Activity-centric Document Clustering
for Information Work
User Modelling
/
Seeliger, Alexander
/
Schmidt, Benedikt
/
Schweizer, Immanuel
/
Mühlhäuser, Max
Proceedings of the 2016 International Conference on Intelligent User
Interfaces
2016-03-07
v.1
p.60-70
© Copyright 2016 ACM
Summary: Multitasking and interruptions in information work make frequent activity
switches necessary. Individuals need to recall and restore earlier states of
work, which generally involves the retrieval of information objects. To avoid
the resulting tooling time, an activity-centric organization of information
objects has been proposed. For each activity, a collection of related information
objects (like documents, websites etc.) is created to improve information
access and serve as a memory aid. While the manual maintenance of such
information collections is a tedious task and becomes an interruption on its
own, the automatic maintenance of such collections using activity mining is
promising. Activity mining utilizes interaction histories to extract unique
activities based on the stream of interaction with information objects. For
activity mining, existing work shows varying success in limited study setups.
In this paper, we present a method for activity mining to generate
activity-centric information object collections automatically from interaction
histories. The technique is a hybrid approach that considers all information
types used in previous work: activity-stream and accessed-content-related
information. Performance is evaluated on interaction histories collected from
eight information workers during several weeks of real work. On this dataset,
our hybrid approach achieves an Adjusted Rand Index (ARI) of 0.53 on average
and up to 0.77, outperforming single-metric approaches.
SCWT: A Joint Workshop on Smart Connected and Wearable Things
Workshops
/
Schnelle-Walka, Dirk
/
Limonad, Lior
/
Grosse-Puppendahl, Tobias
/
Lanir, Joel
/
Müller, Florian
/
Mecella, Massimo
/
Luyten, Kris
/
Kuflik, Tsvi
/
Brdiczka, Oliver
/
Mühlhäuser, Max
Companion Proceedings of the 2016 International Conference on Intelligent
User Interfaces
2016-03-07
v.2
p.3-5
© Copyright 2016 ACM
Summary: The increasing number of smart objects in our everyday life shapes how we
interact beyond the desktop. In this workshop we discuss how advanced
interactions with smart objects in the context of the Internet-of-Things should
be designed from various perspectives, such as HCI and AI as well as industry
and academia.
Capricate: A Fabrication Pipeline to Design and 3D Print Capacitive Touch
Sensors for Interactive Objects
Session 4A: Fabrication 2 -- Flexible and Printed Electronics
/
Schmitz, Martin
/
Khalilbeigi, Mohammadreza
/
Balwierz, Matthias
/
Lissermann, Roman
/
Mühlhäuser, Max
/
Steimle, Jürgen
Proceedings of the 2015 ACM Symposium on User Interface Software and
Technology
2015-11-05
v.1
p.253-258
© Copyright 2015 ACM
Summary: 3D printing is widely used to physically prototype the look and feel of 3D
objects. Interaction possibilities of these prototypes, however, are often
limited to mechanical parts or post-assembled electronics. In this paper, we
present Capricate, a fabrication pipeline that enables users to easily design
and 3D print highly customized objects that feature embedded capacitive
multi-touch sensing. The object is printed in a single pass using a commodity
multi-material 3D printer. To enable touch input on a wide variety of 3D
printable surfaces, we contribute two techniques for designing and printing
embedded sensors of custom shape. The fabrication pipeline is technically
validated by a series of experiments and practically validated by a set of
example applications. They demonstrate the wide applicability of Capricate for
interactive objects.
In-Situ Occlusion Resolution for Hybrid Tabletop Environments
Interactive Tabletops
/
Riemann, Jan
/
Khalilbeigi, Mohammadreza
/
Mühlhäuser, Max
Proceedings of IFIP INTERACT'15: Human-Computer Interaction, Part III
2015-09-14
v.3
p.278-295
Keywords: Interactive tabletops; Occlusion awareness; Hybrid interaction; Peripheral
displays; Multitouch
© Copyright 2015 Springer International Publishing Switzerland
Summary: In this paper we explore the use of in situ occlusion resolution in mixed
physical/digital tabletop scenarios. We propose the extension of back-projected
tabletops with interactive top-projection to turn the physical object's surface
into peripheral displays. These displays are used to resolve occlusion in situ
without consuming additional tabletop display space, while preserving the
spatial perception of the occluded objects. We contribute a visualization
concept and a set of interaction techniques for in-situ occlusion resolution
and easy access to occluded objects. The techniques are implemented in a
system named ProjecTop, which is evaluated in a quantitative user study. The study
results highlight how top-projection can be beneficially used. We conclude with
a set of design implications derived from the study's results.
Engineering interactive systems with SCXML
Workshop summaries
/
Schnelle-Walka, Dirk
/
Radomski, Stefan
/
Barnett, Jim
/
Mühlhäuser, Max
ACM SIGCHI 2015 Symposium on Engineering Interactive Computing Systems
2015-06-23
p.298-299
© Copyright 2015 ACM
Summary: The W3C SCXML standard for Harel statecharts, in unison with the W3C MMI
architecture specification and related work from the W3C MMI working group,
forms a promising suite of recommendations to become the "HTML of multimodal
applications". This 2nd installment of the workshop will provide a forum for
academia and industry alike to discuss recent developments in dialog modeling
using statecharts and to identify remaining shortcomings in the
operationalization and application of the related approaches.
StackTop: Hybrid Physical-Digital Stacking on Interactive Tabletops
WIP Theme: Displays
/
Riemann, Jan
/
Khalilbeigi, Mohammadreza
/
Dezfuli, Niloofar
/
Mühlhäuser, Max
Extended Abstracts of the ACM CHI'15 Conference on Human Factors in
Computing Systems
2015-04-18
v.2
p.1127-1132
© Copyright 2015 ACM
Summary: The concurrent use of digital and physical documents on interactive surfaces
is becoming more and more common. However, the integration of both document
types is limited, one example being the ability to stack documents. In this
paper we propose StackTop, an integrated system supporting ordered hybrid
digital/physical piling (hybrid stacking) on interactive surfaces. This allows
for a tighter physical/digital integration in hybrid workspaces and provides a
more consistent approach when working with hybrid document sets.
SmartObjects: Fourth Workshop on Interacting with Smart Objects
Workshops
/
Schnelle-Walka, Dirk
/
Mühlhäuser, Max
/
Radomski, Stefan
/
Brdiczka, Oliver
/
Huber, Jochen
/
Luyten, Kris
/
Grosse-Puppendahl, Tobias
Proceedings of the 2015 International Conference on Intelligent User
Interfaces
2015-03-29
v.1
p.453-454
© Copyright 2015 ACM
Summary: The increasing number of smart objects in our everyday life shapes how we
interact beyond the desktop. In this workshop we discussed how the interaction
with these smart objects should be designed from various perspectives. This
year's workshop put a special focus on affective computing with smart objects,
as reflected by the keynote talk.
EarPut: augmenting ear-worn devices for ear-based interaction
User experience
/
Lissermann, Roman
/
Huber, Jochen
/
Hadjakos, Aristotelis
/
Nanayakkara, Suranga
/
Mühlhäuser, Max
Proceedings of the 2014 Australian Computer-Human Interaction Conference
2014-12-02
p.300-307
© Copyright 2014 ACM
Summary: One of the pervasive challenges in mobile interaction is decreasing the
visual demand of interfaces towards eyes-free interaction. In this paper, we
focus on the unique affordances of the human ear to support one-handed and
eyes-free mobile interaction. We present EarPut, a novel interface concept and
hardware prototype, which unobtrusively augments a variety of accessories that
are worn behind the ear (e.g. headsets or glasses) to instrument the human ear
as an interactive surface. The contribution of this paper is three-fold. We
contribute (i) results from a controlled experiment with 27 participants,
providing empirical evidence that people are able to target salient regions on
their ear effectively and precisely, (ii) a first, systematically derived
design space for ear-based interaction and (iii) a set of proof of concept
EarPut applications that leverage the design space, embracing mobile media
navigation, mobile gaming, and smart home interaction.
Making Tabletop Interaction Accessible for Blind Users
Posters
/
Kunz, Andreas
/
Schnelle-Walka, Dirk
/
Alavi, Ali
/
Pölzer, Stephan
/
Mühlhäuser, Max
/
Miesenberger, Klaus
Proceedings of the 2014 ACM International Conference on Interactive
Tabletops and Surfaces
2014-11-16
p.327-332
© Copyright 2014 ACM
Summary: Tabletop systems and their interaction capabilities are typically a domain
for sighted people only. While the content on the tabletop can already be made
accessible to blind people, the interaction above the tabletop is still
inaccessible. This paper describes our approach towards making the above
tabletop interaction accessible to blind people by using LEAP sensors and
speech recognition.
PalmRC: leveraging the palm surface as an imaginary eyes-free television
remote control
/
Dezfuli, Niloofar
/
Khalilbeigi, Mohammadreza
/
Huber, Jochen
/
Özkorkmaz, Murat
/
Mühlhäuser, Max
Behaviour and Information Technology
2014-08-03
v.33
n.8
p.829-843
© Copyright 2014 Taylor and Francis
Summary: User input on television (TV) typically requires a mediator device such as a
handheld remote control. While this is a well-established interaction paradigm,
a handheld device has serious drawbacks: it can be easily misplaced due to its
mobility and, in the case of a touch-screen interface, it requires additional
visual attention. Emerging interaction paradigms such as 3D mid-air gestures
using novel depth sensors (e.g. Microsoft Kinect), aim at overcoming these
limitations, but are known to be tiring. In this article, we propose to
leverage the palm as an interactive surface for TV remote control. We present
three user studies which set the base for our four contributions: We (1)
qualitatively explore the conceptual design space of the proposed imaginary
palm-based remote control in an explorative study, (2) quantitatively
investigate the effectiveness and accuracy of such an interface in a controlled
experiment, (3) assess user acceptance in a controlled laboratory evaluation
comparing the PalmRC concept with the two most typical existing input
modalities, a conventional remote control and a touch-based remote-control
interface on a smartphone, in terms of user experience, task load, and
overall preference, and (4) contribute PalmRC, an eyes-free, palm-surface-based
TV remote control. Our results show that the palm has the potential to be
leveraged for device-less eyes-free TV remote interaction without any
third-party mediator device.
Multimodal Fusion and Fission within W3C Standards for Nonverbal
Communication with Blind Persons
Accessibility of Non-verbal Communication
/
Schnelle-Walka, Dirk
/
Radomski, Stefan
/
Mühlhäuser, Max
ICCHP'14: International Conference on Computers Helping People with Special
Needs, Part 1
2014-07-09
v.1
p.209-213
© Copyright 2014 Springer International Publishing
Summary: Multimodal fusion and multimodal fission are well known concepts for
multimodal systems but have not been well integrated in current architectures
to support collaboration of blind and sighted people. In this paper we describe
our initial thoughts on multimodal dialog modeling in multiuser dialog
settings employing multiple modalities, based on W3C standards such as the
Multimodal Architecture and Interfaces.
A Mind Map for Brainstorming Sessions with Blind and Sighted Persons
Accessibility of Non-verbal Communication
/
Schnelle-Walka, Dirk
/
Alavi, Ali
/
Ostie, Patrick
/
Mühlhäuser, Max
/
Kunz, Andreas
ICCHP'14: International Conference on Computers Helping People with Special
Needs, Part 1
2014-07-09
v.1
p.214-219
Keywords: Accessibility; Non-verbal Communication Elements; Computer Supported
Collaborative Work; MindMap
© Copyright 2014 Springer International Publishing
Summary: Accessible mind-mapping tools are, due to their visual nature, hardly
available; where they exist, they focus on rendering the map structure and do
not consider nonverbal communication elements in ongoing discussions. In this
paper, we describe the need for this type of communication as well as a mind
map tool that is capable of processing the respective information, coming from
a Leap tracking system attached to the interactive surface.
Towards an Information State Update Model Approach for Nonverbal
Communication
Accessibility of Non-verbal Communication
/
Schnelle-Walka, Dirk
/
Radomski, Stefan
/
Radeck-Arneth, Stephan
/
Mühlhäuser, Max
ICCHP'14: International Conference on Computers Helping People with Special
Needs, Part 1
2014-07-09
v.1
p.226-230
© Copyright 2014 Springer International Publishing
Summary: The Information State Update (ISU) Model describes an approach to dialog
management that was predominantly applied to single user scenarios using voice
as the only modality. Extensions to multimodal interaction with multiple users
are rarely considered and, if presented, hard to operationalize. In this paper
we describe our approach of dialog modeling based on ISU in multiuser dialog
settings employing multiple modalities, including nonverbal communication.
Accessibility of Brainstorming Sessions for Blind People
Accessibility of Non-verbal Communication
/
Kunz, Andreas
/
Miesenberger, Klaus
/
Mühlhäuser, Max
/
Alavi, Ali
/
Pölzer, Stephan
/
Pöll, Daniel
/
Heumader, Peter
/
Schnelle-Walka, Dirk
ICCHP'14: International Conference on Computers Helping People with Special
Needs, Part 1
2014-07-09
v.1
p.237-244
Keywords: Accessibility; Mind map; Non-verbal Communication Elements
© Copyright 2014 Springer International Publishing
Summary: Today, research focuses on the accessibility of explicit information for
blind users. This gives only partial access to the information flow in
brainstorming sessions, since non-verbal communication is not supported.
Advances in ICT, however, allow capturing implicit information like hand
gestures as an important part of non-verbal communication. Thus, we describe a
system that allows integrating blind people into a brainstorming session using
a mind map.
Engineering interactive systems with SCXML
Workshop summaries
/
Schnelle-Walka, Dirk
/
Radomski, Stefan
/
Lager, Torbjörn
/
Barnett, Jim
/
Dahl, Deborah
/
Mühlhäuser, Max
ACM SIGCHI 2014 Symposium on Engineering Interactive Computing Systems
2014-06-17
p.295-296
© Copyright 2014 ACM
Summary: The W3C is about to finalize the SCXML standard to express Harel
state-machines as XML documents. In unison with the W3C MMI architecture
specification and related work from the W3C MMI working group, this
recommendation might be a promising candidate to become the "HTML of
multimodal applications".
Permulin: mixed-focus collaboration on multi-view tabletops
Head-worn displays
/
Lissermann, Roman
/
Huber, Jochen
/
Schmitz, Martin
/
Steimle, Jürgen
/
Mühlhäuser, Max
Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems
2014-04-26
v.1
p.3191-3200
© Copyright 2014 ACM
Summary: We contribute Permulin, an integrated set of interaction and visualization
techniques for multi-view tabletops to support co-located collaboration across
a wide variety of collaborative coupling styles. These techniques (1) provide
support both for group work and for individual work, as well as for the
transitions in-between, (2) contribute sharing and peeking techniques to
support mutual awareness and group coordination during phases of individual
work, (3) reduce interference during group work on a group view, and (4)
directly integrate with conventional multi-touch input. We illustrate our
techniques in a proof-of-concept implementation with the two example
applications of map navigation and photo collages. Results from two user
studies demonstrate that Permulin supports fluent transitions between
individual and group work and exhibits unique awareness properties that allow
participants to be highly aware of each other during tightly coupled
collaboration, while being able to unobtrusively perform individual work during
loosely coupled collaboration.
SmartObjects: third workshop on interacting with smart objects
Workshop summaries
/
Schnelle-Walka, Dirk
/
Huber, Jochen
/
Radomski, Stefan
/
Brdiczka, Oliver
/
Luyten, Kris
/
Mühlhäuser, Max
Companion Proceedings of the 2014 International Conference on Intelligent
User Interfaces
2014-02-24
v.2
p.45-46
© Copyright 2014 ACM
Summary: The increasing number of smart objects in our everyday life shapes how we
interact beyond the desktop. In this workshop we discuss how interaction with
these smart objects should be designed from various perspectives.
ObjecTop: occlusion awareness of physical objects on interactive tabletops
Latency and occlusion + CSCW
/
Khalilbeigi, Mohammadreza
/
Steimle, Jürgen
/
Riemann, Jan
/
Dezfuli, Niloofar
/
Mühlhäuser, Max
/
Hollan, James D.
Proceedings of the 2013 ACM International Conference on Interactive
Tabletops and Surfaces
2013-10-06
p.255-264
© Copyright 2013 ACM
Summary: In this paper, we address the challenges of occlusion created by physical
objects on interactive tabletops. We contribute an integrated set of
interaction techniques designed to cope with the physical occlusion problem as
well as facilitate organizing objects in hybrid settings. These techniques are
implemented in ObjecTop, a system to support tabletop display applications
involving both physical and virtual objects. We compile design requirements for
occlusion-aware tabletop systems and conduct the first in-depth user study
comparing ObjecTop with conventional tabletop interfaces in search and layout
tasks. The empirical results show that occlusion-aware techniques outperform
the conventional tabletop interface. Furthermore, our findings indicate that
physical properties of occluders dramatically influence which strategy users
employ to cope with occlusion. We conclude with a set of design implications
derived from the study.
PeriTop: extending back-projected tabletops with top-projected peripheral
displays
Poster
/
Riemann, Jan
/
Khalilbeigi, Mohammadreza
/
Mühlhäuser, Max
Proceedings of the 2013 ACM International Conference on Interactive
Tabletops and Surfaces
2013-10-06
p.349-352
© Copyright 2013 ACM
Summary: Integrating digital tabletops into homes or desktop environments will give
rise to a set of problems emerging from placing everyday objects on interactive
tabletops. Chief among them is the arbitrary placement of physical objects that
considerably limits the digital working space on the surface of tabletops. In
this paper we contribute PeriTop, an interactive back-projected tabletop
system that exploits the surfaces of physical objects and the tabletop rim as
additional interactive displays to represent and interact with digital
objects. This is realized by augmenting the tabletop system with an
inexpensive pico projector-depth camera pair. We support the PeriTop approach
by depicting several salient use case scenarios that aid users in performing
activities in hybrid physical-digital tabletop settings.
Interchanging and preserving presentation recordings
Multimedia II
/
Höver, Kai Michael
/
Mühlhäuser, Max
Proceedings of the 2013 ACM Symposium on Document Engineering
2013-09-10
p.277-280
© Copyright 2013 ACM
Summary: The importance of presentation recordings is steadily increasing. This trend
is indicated, for example, by the growing MOOC market. Many systems for the
production of such recordings exist. However, produced recordings are not
exchangeable between systems due to different representation formats. In this
paper, we present an ontology for the conceptual description of presentation
recordings and describe the transformation process between different systems.
Furthermore, we explain how this ontology can be used to preserve presentation
recordings as ebooks.
CoStream: co-construction of shared experiences through mobile live video
sharing
Innovative interaction
/
Dezfuli, Niloofar
/
Huber, Jochen
/
Churchill, Elizabeth F.
/
Mühlhäuser, Max
Proceedings of the 27th BCS International Conference on Human-Computer
Interaction
2013-09-09
p.6
© Copyright 2013 Authors
Summary: Mobile media sharing is an increasingly popular form of social media
interaction. Research has shown that asynchronous sharing fosters and maintains
social connections and serves as a memory aid. More recently, researchers have
investigated the potential for mobile media sharing as a mechanism for
providing additional event-related information to spectators in a stadium. In
this paper, we describe CoStream, a novel system for mobile live sharing of
user-generated video in-situ during events. Developed iteratively with users,
CoStream goes beyond prior work by providing a strong real-time coupling to the
event, leveraging users' social connections to provide multiple perspectives on
the ongoing action. Field trials demonstrate that real time sharing of
different perspectives on the same event has the potential to provide
fundamentally new experiences of same-place events, such as concerts or stadium
sports. We discuss how CoStream enriches social interactions, increases
contextual, social, and spatial awareness, and thus encourages active
spectatorship. We further contribute key requirements for the design of future
interfaces supporting the co-construction of shared experiences during events,
in-situ.
Linked data selectors
LILE'13 session 2
/
Höver, Kai Michael
/
Mühlhäuser, Max
Companion Proceedings of the 2013 International Conference on the World Wide
Web
2013-05-13
v.2
p.439-444
© Copyright 2013 ACM
Summary: In the world of Linked Data, HTTP URIs are names. A URI is dereferenced to
obtain a copy or description of the referred resource. If only a fragment of a
resource should be referred, pointing to the whole resource is not sufficient.
Therefore, it is necessary to be able to refer to fragments of resources, and
to name them with URIs to interlink them in the Web of Data. This is especially
helpful in the educational context, where learning processes including
discussion and social interaction demand exact references and granular
selections of media. This paper presents the specification of Linked Data
Selectors, an OWL ontology for describing dereferenceable fragments of Web
resources.