Smartphone Notifications in Context: a Case Study on Receptivity by the
Example of an Advertising Service
Late-Breaking Works: Interaction in Specific Domains
/
Westermann, Tilo
/
Wechsung, Ina
/
Möller, Sebastian
Extended Abstracts of the ACM CHI'16 Conference on Human Factors in
Computing Systems
2016-05-07
v.2
p.2355-2361
© Copyright 2016 ACM
Summary: Notifications on smartphones are ubiquitous; they provide a broad
range of information, from rather technical (e.g. app updates) to interpersonal
(e.g. a message from a friend). Their disruptive nature poses the challenge of
finding opportune moments for delivering notifications, and receptivity to
notifications depends on various factors, including perceived urgency and
time of delivery. This paper presents a case study with 126,000 participants
investigating the effect of time on receptivity to smartphone notifications
in the context of an advertising service. Results show significant
differences across weekdays and times of day regarding response times and the
number of notification-triggered application launches. We conclude with a
discussion of the key findings and propose design implications for push
notification campaigns.
Development and Validation of Extrinsic Motivation Scale for Crowdsourcing
Micro-task Platforms
Crowdworkers' Motivation
/
Naderi, Babak
/
Wechsung, Ina
/
Polzehl, Tim
/
Möller, Sebastian
Proceedings of the 2014 International Workshop on Crowdsourcing for
Multimedia
2014-11-07
p.31-36
© Copyright 2014 ACM
Summary: In this paper, we introduce a scale for measuring the extrinsic motivation
of crowd workers. The new questionnaire is strongly based on the Work Extrinsic
Intrinsic Motivation Scale (WEIMS) [17] and theoretically follows the
Self-Determination Theory (SDT) of motivation. The questionnaire has been
applied and validated on a crowdsourcing micro-task platform. This instrument
can be used to study the dynamics of extrinsic motivation while taking into
account individual differences, and it provides meaningful insights that help
design a proper incentive framework for each crowd worker, eventually leading
to better performance, increased well-being, and higher overall quality.
Affective Quality of Audio Feedback on Mobile Devices in Different Contexts
/
Seebode, Julia
/
Schleicher, Robert
/
Möller, Sebastian
International Journal of Mobile Human Computer Interaction
2014-10
v.6
n.4
p.1-21
© Copyright 2014 IGI Global
Summary: Sound is a common means to give feedback on mobile devices. Much research
has been conducted to examine the learnability of and user performance with
systems that provide audio feedback. In many cases a training period is
necessary to understand the meaning of a specific feedback sound, because its
functional connotation may be ambiguous. Additionally, no standardized
evaluation method to measure the subjective quality of these messages has been
established, especially regarding the affective quality of feedback sounds. The
authors describe a series of experiments to investigate the affective
impression of audio feedback on mobile devices as well as its functional
meaning under varying contexts prototypical for mobile phone usage. Results
indicate that context influences the emotional impression and that there is a
relation between affective quality and functional appropriateness. These
findings confirm that emotional stimuli are suitable as feedback messages in
the context of mobile HCI and that context matters for the affective quality of
sounds emitted by mobile phones.
Classification of the Context of Use for Smart Phones
User Experience Case Studies
/
Reichmuth, Ralf
/
Möller, Sebastian
HCI International 2014: 16th International Conference on HCI: Posters'
Extended Abstracts, Part II
2014-06-22
v.5
p.638-642
Keywords: classification of the context of use; mobile context of use; influence
factors; mobile app
© Copyright 2014 Springer International Publishing
Summary: Mobile devices like smart phones are used in various contexts of use. Hence,
we conducted an explorative field study to determine factors influencing smart
phone interaction. The results of the study suggest that a smart phone is often
used in a relaxed situation and a familiar environment; in contrast, few
interactions take place in stressful situations. In addition, the location and
the activity of the test participant seem to have an impact on smart phone
interaction.
Predicting task execution times by deriving enhanced cognitive models from
user interface development models
Model-based UIs session
/
Quade, Michael
/
Halbrügge, Marc
/
Engelbrecht, Klaus-Peter
/
Albayrak, Sahin
/
Möller, Sebastian
ACM SIGCHI 2014 Symposium on Engineering Interactive Computing Systems
2014-06-17
p.139-148
© Copyright 2014 ACM
Summary: Adaptive user interfaces (UIs) offer the opportunity to adapt to changes in
context, but this also poses the challenge of evaluating the usability of the
many different versions of the resulting UI. Consequently, usability
evaluations tend to become very complex and time-consuming. We describe an
approach that combines model-based usability evaluation with development models
of adaptive UIs. In particular, we present how a cognitive user behavior model
can be created automatically from UI development models, thus saving time and
costs when predicting task execution times. With the help of two usability
studies, we show that the resulting predictions can be further improved by
using information encoded in the UI development models.
EDITED BOOK
Natural Interaction with Robots, Knowbots and Smartphones: Putting Spoken
Dialog Systems into Practice
/
Mariani, Joseph
/
Rosset, Sophie
/
Garnier-Rizet, Martine
/
Devillers, Laurence
2014
p.397
Springer New York
== Spoken Dialog Systems in Everyday Applications ==
Spoken Language Understanding for Natural Interaction: The Siri Experience (3-14)
+ Bellegarda, Jerome R.
Development of Speech-Based In-Car HMI Concepts for Information Exchange Internet Apps (15-28)
+ Hofmann, Hansjörg
+ Silberstein, Anna
+ Ehrlich, Ute
+ Berton, André
+ Müller, Christian
+ Mahr, Angela
Real Users and Real Dialog Systems: The Hard Challenge for SDS (29-36)
+ Black, Alan W.
+ Eskenazi, Maxine
A Multimodal Multi-device Discourse and Dialogue Infrastructure for Collaborative Decision-Making in Medicine (37-47)
+ Sonntag, Daniel
+ Schulz, Christian
== Spoken Dialog Prototypes and Products ==
Yochina: Mobile Multimedia and Multimodal Crosslingual Dialogue System (51-57)
+ Xu, Feiyu
+ Schmeier, Sven
+ Ai, Renlong
+ Uszkoreit, Hans
Walk This Way: Spatial Grounding for City Exploration (59-67)
+ Boye, Johan
+ Fredriksson, Morgan
+ Götze, Jana
+ Gustafson, Joakim
+ Königsmann, Jürgen
Multimodal Dialogue System for Interaction in AmI Environment by Means of File-Based Services (69-77)
+ Ábalos, Nieves
+ Espejo, Gonzalo
+ López-Cózar, Ramón
+ Ballesteros, Francisco J.
+ Soriano, Enrique
+ Guardiola, Gorka
Development of a Toolkit Handling Multiple Speech-Oriented Guidance Agents for Mobile Applications (79-85)
+ Hara, Sunao
+ Kawanami, Hiromichi
+ Saruwatari, Hiroshi
+ Shikano, Kiyohiro
Providing Interactive and User-Adapted E-City Services by Means of Voice Portals (87-98)
+ Griol, David
+ García-Jiménez, María
+ Callejas, Zoraida
+ López-Cózar, Ramón
== Multi-domain, Crosslingual Spoken Dialog Systems ==
Efficient Language Model Construction for Spoken Dialog Systems by Inducting Language Resources of Different Languages (101-110)
+ Misu, Teruhisa
+ Matsuda, Shigeki
+ Mizukami, Etsuo
+ Kashioka, Hideki
+ Li, Haizhou
Towards Online Planning for Dialogue Management with Rich Domain Knowledge (111-123)
+ Lison, Pierre
A Two-Step Approach for Efficient Domain Selection in Multi-Domain Dialog Systems (125-131)
+ Lee, Injae
+ Kim, Seokhwan
+ Kim, Kyungduk
+ Lee, Donghyeon
+ Choi, Junhwi
+ Ryu, Seonghan
+ Lee, Gary Geunbae
== Human-Robot Interaction ==
From Informative Cooperative Dialogues to Long-Term Social Relation with a Robot (135-151)
+ Buendia, Axel
+ Devillers, Laurence
Integration of Multiple Sound Source Localization Results for Speaker Identification in Multiparty Dialogue System (153-165)
+ Nakashima, Taichi
+ Komatani, Kazunori
+ Sato, Satoshi
Investigating the Social Facilitation Effect in Human-Robot Interaction (167-177)
+ Wechsung, Ina
+ Ehrenbrink, Patrick
+ Schleicher, Robert
+ Möller, Sebastian
More Than Just Words: Building a Chatty Robot (179-185)
+ Gilmartin, Emer
+ Campbell, Nick
Predicting When People Will Speak to a Humanoid Robot (187-198)
+ Sugiyama, Takaaki
+ Komatani, Kazunori
+ Sato, Satoshi
Designing an Emotion Detection System for a Socially Intelligent Human-Robot Interaction (199-211)
+ Chastagnol, Clément
+ Clavel, Céline
+ Courgeon, Matthieu
+ Devillers, Laurence
Multimodal Open-Domain Conversations with the Nao Robot (213-224)
+ Jokinen, Kristiina
+ Wilcock, Graham
Component Pluggable Dialogue Framework and Its Application to Social Robots (225-237)
+ Jiang, Ridong
+ Tan, Yeow Kee
+ Limbu, Dilip Kumar
+ Dung, Tran Anh
+ Li, Haizhou
== Spoken Dialog Systems Components ==
Visual Contribution to Word Prominence Detection in a Playful Interaction Setting (241-247)
+ Heckmann, Martin
Label Noise Robustness and Learning Speed in a Self-Learning Vocal User Interface (249-259)
+ Ons, Bart
+ Gemmeke, Jort F.
+ Van hamme, Hugo
Topic Classification of Spoken Inquiries Using Transductive Support Vector Machine (261-267)
+ Torres, Rafael
+ Kawanami, Hiromichi
+ Matsui, Tomoko
+ Saruwatari, Hiroshi
+ Shikano, Kiyohiro
Frame-Level Selective Decoding Using Native and Non-native Acoustic Models for Robust Speech Recognition to Native and Non-native Speech (269-274)
+ Oh, Yoo Rhee
+ Chung, Hoon
+ Kang, Jeom-ja
+ Lee, Yun Keun
Analysis of Speech Under Stress and Cognitive Load in USAR Operations (275-281)
+ Charfuelan, Marcela
+ Kruijff, Geert-Jan
== Dialog Management ==
Does Personality Matter? Expressive Generation for Dialogue Interaction (285-301)
+ Walker, Marilyn A.
+ Sawyer, Jennifer
+ Lin, Grace
+ Wing, Sam
Application and Evaluation of a Conditioned Hidden Markov Model for Estimating Interaction Quality of Spoken Dialogue Systems (303-312)
+ Ultes, Stefan
+ ElChab, Robert
+ Minker, Wolfgang
FLoReS: A Forward Looking, Reward Seeking, Dialogue Manager (313-325)
+ Morbini, Fabrizio
+ DeVault, David
+ Sagae, Kenji
+ Gerten, Jillian
+ Nazarian, Angela
+ Traum, David
A Clustering Approach to Assess Real User Profiles in Spoken Dialogue Systems (327-334)
+ Callejas, Zoraida
+ Griol, David
+ Engelbrecht, Klaus-Peter
+ López-Cózar, Ramón
What Are They Achieving Through the Conversation? Modeling Guide-Tourist Dialogues by Extended Grounding Networks (335-341)
+ Mizukami, Etsuo
+ Kashioka, Hideki
Co-adaptation in Spoken Dialogue Systems (343-353)
+ Chandramohan, Senthilkumar
+ Geist, Matthieu
+ Lefèvre, Fabrice
+ Pietquin, Olivier
Developing Non-goal Dialog System Based on Examples of Drama Television (355-361)
+ Nio, Lasguido
+ Sakti, Sakriani
+ Neubig, Graham
+ Toda, Tomoki
+ Adriani, Mirna
+ Nakamura, Satoshi
A User Model for Dialog System Evaluation Based on Activation of Subgoals (363-374)
+ Engelbrecht, Klaus-Peter
Real-Time Feedback System for Monitoring and Facilitating Discussions (375-387)
+ Sarda, Sanat
+ Constable, Martin
+ Dauwels, Justin
+ Dauwels (Okutsu), Shoko
+ Elgendi, Mohamed
+ Mengyu, Zhou
+ Rasheed, Umer
+ Tahir, Yasir
+ Thalmann, Daniel
+ Magnenat-Thalmann, Nadia
Evaluation of Invalid Input Discrimination Using Bag-of-Words for Speech-Oriented Guidance System (389-397)
+ Majima, Haruka
+ Torres, Rafael
+ Kawanami, Hiromichi
+ Hara, Sunao
+ Matsui, Tomoko
+ Saruwatari, Hiroshi
+ Shikano, Kiyohiro
Investigating the affective impression of tactile feedback on mobile devices
Innovative interaction
/
Seebode, Julia
/
Schleicher, Robert
/
Wechsung, Ina
/
Möller, Sebastian
Proceedings of the 27th BCS International Conference on Human-Computer
Interaction
2013-09-09
p.4
© Copyright 2013 Authors
Summary: On mobile devices, vibrotactile messages are a common way to give feedback
to the user. They might be a less obtrusive means to communicate information
about the system status compared to auditory feedback. Much research has
focused on the possibilities to perceive and discriminate different
vibrotactile messages, but less on their content-related interpretation. We
describe a series of two studies. The aim of the pilot study was to find
meaningful vibrotactile messages, whose affective impression and functional
connotation we then investigated on a mobile device within varying staged
contexts. Results show that the affective impression of these so-called Tactons
is independent of the context. Moreover, we observed a relation between ratings
of affective quality and functional applicability. We conclude that tactile
feedback messages are unobtrusive, but have to be designed carefully to convey
their intended meaning in a working context as well as in a leisure-time
situation.
Did you notice?: neuronal processing of multimodal mobile phone feedback
Evaluation and design methods
/
Antons, Jan-Niklas
/
Arndt, Sebastian
/
Seebode, Julia
/
Schleicher, Robert
/
Möller, Sebastian
Extended Abstracts of ACM CHI'13 Conference on Human Factors in Computing
Systems
2013-04-27
v.2
p.325-330
© Copyright 2013 ACM
Summary: To acknowledge information received by a mobile device, a number of feedback
modalities are available for which human information processing is still not
completely understood. This paper focuses on how different feedback modalities
are perceived by users, introducing a test method that is new in this field of
research. The evaluation is done via standard self-assessment and by analyzing
brain activity [electroencephalogram (EEG)]. We conducted an experiment with
unimodal and multimodal feedback combinations, and compared behavioral user
data to EEG data. We could show that EEG is a feasible method for quantifying
conscious processing of feedback in different modalities, as it correlates
highly with subjective ratings. EEG can thus be considered an additional tool
for assessing the effectiveness of feedback, revealing conscious and potentially
non-conscious information processing.
Affective quality of audio feedback in different contexts
Audio & music
/
Seebode, Julia
/
Schleicher, Robert
/
Möller, Sebastian
Proceedings of the 2012 International Conference on Mobile and Ubiquitous
Multimedia
2012-12-04
p.32
© Copyright 2012 ACM
Summary: Sound is commonly used in different ways to give feedback on mobile devices.
Much research has focused on the learnability of and user performance with
systems that have audio feedback, but so far there is no standardized method
to evaluate the subjective quality of auditory feedback messages. We describe a
study to investigate the affective impression of short audio feedback on mobile
devices and its functional connotation in three different contexts. Results
show that context influences the affective impression of sounds and that there
is a relation between ratings of affective quality and functional
applicability. We conclude that sounds can be unobtrusive, yet still convey
their intended meaning in a working context as well as in a leisure-time
situation without being perceived as disturbing.
MoCCha: a mobile campus app for analyzing user behavior in the field
Posters
/
Westermann, Tilo
/
Möller, Sebastian
Proceedings of the 7th Nordic Conference on Human-Computer Interaction
2012-10-14
p.799-800
© Copyright 2012 ACM
Summary: In this paper, we present MoCCha, a mobile campus application used not only
as a subject of research, but as a research platform for a number of scientific
disciplines. Using apps that are available from mobile application stores
enables studying user behavior in the field, aiming for the ecological
validity that human-subject studies in lab environments potentially lack.
Using device models for analyzing user interaction problems
Posters and demonstrations
/
Schulz, Matthias
/
Schmidt, Stefan
/
Engelbrecht, Klaus-Peter
/
Möller, Sebastian
Thirteenth Annual ACM SIGACCESS Conference on Assistive Technologies
2011-10-24
p.303-304
© Copyright 2011 ACM
Summary: This paper presents work in progress that aims at analyzing the origins of
interaction problems that certain users encounter when interacting with new
technology. Our analysis is based on device models that categorize certain
classes of devices via a pre-defined set of features. We provide examples
showing that usability problems are partially caused by an erroneous transfer
of device features to new or unknown devices.
On the need for different security methods on mobile phones
Work and security
/
Ben-Asher, Noam
/
Kirschnick, Niklas
/
Sieger, Hanul
/
Meyer, Joachim
/
Ben-Oved, Asaf
/
Möller, Sebastian
Proceedings of the 13th Conference on Human-computer interaction with mobile
devices and services
2011-08-30
p.465-473
© Copyright 2011 ACM
Summary: Mobile phones are rapidly becoming small-size general-purpose computers,
so-called smartphones. However, applications and data stored on mobile phones
are less protected from unauthorized access than on most desktop and mobile
computers. This paper presents a survey on users' security needs, awareness, and
concerns in the context of mobile phones. It also evaluates acceptance and
perceived protection of existing and novel authentication methods. The
responses from 465 participants reveal that users are interested in increased
security and data protection. The current protection by PIN (Personal
Identification Number) is perceived as neither adequate nor convenient in all
cases. The sensitivity of data stored on the devices varies depending on the
data type and the context of use, calling for another level of protection.
Based on these findings, a two-level security model for mobile phones is
proposed. The model provides differential data and service protection by
utilizing existing capabilities of a mobile phone for authenticating users.
A Model of Shortcut Usage in Multimodal Human-Computer Interaction
Digital Human Modeling and Design
/
Schaffer, Stefan
/
Schleicher, Robert
/
Möller, Sebastian
DHM 2011: 3rd International Conference on Digital Human Modeling
2011-07-09
p.337-346
Keywords: Multimodal HCI; User Modeling; Automated Usability Evaluation
Copyright © 2011 Springer-Verlag
Summary: Users of multimodal systems have to choose between different interaction
strategies, and the number of interaction steps needed to solve a task can vary
across the available modalities. In this work we introduce such a task and
present empirical data showing that users' strategy selection is affected
by modality-specific shortcuts. The system under investigation offered touch
screen and speech as input modalities. We introduce a first version of an ACT-R
model that uses the architecture-inherent mechanisms of production compilation
and utility learning to identify modality-specific shortcuts. A simple task
analysis is implemented in declarative memory. The model matches the human
data reasonably accurately. In future work we will try to improve the fit by
extending the model with further influencing factors of modality selection,
such as speech recognition errors. The model will also be refined regarding the
cognitive processes of speech production and touch screen interaction.
I'm home: Defining and evaluating a gesture set for smart-home control
/
Kühnel, Christine
/
Westermann, Tilo
/
Hemmert, Fabian
/
Kratz, Sven
/
Müller, Alexander
/
Möller, Sebastian
International Journal of Human-Computer Studies
2011
v.69
n.11
p.693-704
10.1016/j.ijhcs.2011.04.005
Keywords: Gesture-based interaction / Smart-home / User-centered design / Mobile
device
© Copyright 2011 Elsevier Ltd.
Summary: Mobile phones seem to present the perfect user interface for interacting
with smart environments, e.g. smart-home systems, as they are nowadays
ubiquitous and equipped with an increasing number of sensors and interface
components, such as multi-touch screens. After giving an overview of related
work, this paper presents the adapted design methodology proposed by Wobbrock et
al. (2009) for the development of a gesture-based user interface to a
smart-home system. The findings for the new domain, device, and gesture space
are presented and compared to the findings of Wobbrock et al. (2009). Three
additional steps are described: a small pre-test survey, a mapping and memory
test, and a performance test of the implemented system.
This paper shows the adaptability of the approach described by Wobbrock et
al. (2009) for three-dimensional gestures in the smart-home domain. Elicited
gestures are described and a first implementation of a user interface based on
these gestures is presented.
Evaluating multimodal systems: a comparison of established questionnaires
and interaction parameters
Full papers
/
Kühnel, Christine
/
Westermann, Tilo
/
Weiss, Benjamin
/
Möller, Sebastian
Proceedings of the Sixth Nordic Conference on Human-Computer Interaction
2010-10-16
p.286-294
Keywords: evaluation, gesture, multimodal interaction, smart-home
© Copyright 2010 ACM
Summary: This paper describes the analysis of established and new questionnaires
concerning their applicability for assessing quality aspects of
multimodal systems. To this purpose, an experiment was conducted with 27
participants interacting with a smart-home system via a voice interface, a
smartphone-based interface, and a multimodal interface. Interaction parameters
were assessed and related to constructs measured with these questionnaires. The
results indicate that some of the questionnaires are suitable for evaluating
multimodal interfaces. On the basis of correlations with interaction parameters,
subscales of these questionnaires can be mapped to quality aspects such as
effectiveness and efficiency. Recommendations are given on how to meet two
important evaluation requirements: which questionnaire to use for comparing two
or more systems or system versions, and how to identify factors or components
of a system that have to be improved. This is another step toward establishing
evaluation methods for multimodal systems.
Making it easier for older people to talk to smart homes: the effect of
early help prompts
Long Paper
/
Wolters, K. Maria
/
Engelbrecht, Klaus-Peter
/
Gödde, Florian
/
Möller, Sebastian
/
Naumann, Anja
/
Schleicher, Robert
Universal Access in the Information Society
2010
v.9
n.4
p.311-325
Keywords: Spoken dialogue systems; Usability; Older adults; Smart homes; Help prompts
© Copyright 2010 Springer-Verlag
Summary: It is well known that help prompts shape how users talk to spoken dialogue
systems. This study investigated the effect of help prompt placement on older
users' interaction with a smart home interface. In the dynamic help condition,
help was only given in response to system errors; in the inherent help
condition, it was also given at the start of each task. Fifteen older and
sixteen younger users interacted with a smart home system using two different
scenarios. Each scenario consisted of several tasks. The linguistic style users
employed to communicate with the system (interaction style) was measured using
the ratio of commands to the overall utterance length (keyword ratio) and the
percentage of content words in the user's utterance that could be understood by
the system (shared vocabulary). While the timing of help prompts did not affect
the interaction style of younger users, early task-specific help supported
older users in adapting their interaction style to the system's capabilities.
Well-placed help prompts can significantly increase the usability of spoken
dialogue systems for older people.
Reliable Evaluation of Multimodal Dialogue Systems
Multimodal User Interfaces
/
Metze, Florian
/
Wechsung, Ina
/
Schaffer, Stefan
/
Seebode, Julia
/
Möller, Sebastian
HCI International 2009: 13th International Conference on Human-Computer
Interaction, Part II: Novel Interaction Methods and Techniques
2009-07-19
v.2
p.75-83
Keywords: usability evaluation methods; multimodal interfaces
Copyright © 2009 Springer-Verlag
Summary: Usability evaluation is an indispensable issue during the development of new
interfaces and interaction paradigms [1]. Although a wide range of reliable
usability evaluation methods exists for graphical user interfaces, mature
methods are rarely available for speech-based interfaces [2]. When it comes to
multimodal interfaces, no standardized approach has so far been established. In
previous studies [3], it was shown that usability questionnaires initially
developed for unimodal systems may lead to unreliable results when applied to
multimodal systems. In the current study, we therefore used several data
sources (direct and indirect measurements) to evaluate two unimodal versions
and one multimodal version of an information system. We investigated to what
extent the different data sources showed concordance for the three system
versions. The aim was to examine if, and under which conditions, common and
widely used methods originally developed for graphical user interfaces are also
appropriate for speech-based and multimodal intelligent interfaces.
Usability Evaluation of Multimodal Interfaces: Is the Whole the Sum of Its
Parts?
Multimodal User Interfaces
/
Wechsung, Ina
/
Engelbrecht, Klaus-Peter
/
Schaffer, Stefan
/
Seebode, Julia
/
Metze, Florian
/
Möller, Sebastian
HCI International 2009: 13th International Conference on Human-Computer
Interaction, Part II: Novel Interaction Methods and Techniques
2009-07-19
v.2
p.113-119
Copyright © 2009 Springer-Verlag
Summary: Usability evaluation of multimodal systems is a complex issue. Multimodal
systems provide multiple channels to communicate with the system. Thus, the
single modalities as well as their combination have to be taken into account.
This paper aims to investigate how ratings of single modalities relate to the
ratings of their combination. Therefore a usability evaluation study was
conducted testing an information system in two unimodal versions and one
multimodal version. Multiple linear regression showed that for overall and
global judgments ratings of the single modalities are very good predictors for
the ratings of the multimodal system. For separate usability aspects (e.g.
hedonic qualities) the prediction was less accurate.
Comparison of Different Talking Heads in Non-Interactive Settings
Agents, Avatars and Personalisation
/
Weiss, Benjamin
/
Kühnel, Christine
/
Wechsung, Ina
/
Möller, Sebastian
/
Fagel, Sascha
HCI International 2009: 13th International Conference on Human-Computer
Interaction, Part III: Ambient, Ubiquitous and Intelligent Interaction
2009-07-19
v.3
p.349-357
Keywords: talking heads; evaluation; quality aspects; smart home domain
Copyright © 2009 Springer-Verlag
Summary: Six different talking heads have been evaluated in two consecutive
experiments. Two text-to-speech components and three head components have been
used. Results from semantic differentials show a clear preference for the most
human-like and expressive head. The analysis of the semantic differentials
reveals three factors each. These factors show different patterns for the head
components. Overall quality is strongly related to one factor, which covers the
quality aspect 'appearance'. Another factor found in both experiments comprises
'human likeliness' and 'naturalness' and is much less correlated with overall
quality. While subjects have been able to clearly separate all head components
with different factors of the semantic differential, only some of these factors
are relevant for explicit quality ratings. A good appearance seems to affect
the perception of sympathy and the ascription of reliability.
Evaluation of a Voice-Based Internet Browser with Untrained and Trained
Users
Language, Text, Voice, Sound, Images and Signs
/
Engelbrecht, Klaus-Peter
/
Wootton, Craig
/
Wechsung, Ina
/
Möller, Sebastian
UAHCI 2009: 5th International Conference on Universal Access in
Human-Computer Interaction, Part III: Applications and Services
2009-07-19
v.3
p.482-491
Keywords: web browsing; spoken dialog systems; Internet experience
Copyright © 2009 Springer-Verlag
Summary: In our paper, we present evaluation results for VoiceBrowse, an interactive
system allowing users to access content and services from the Internet via
voice control. We compare two user groups, inexperienced and experienced
computer users, regarding their performance with and judgment of two versions
of the system differing in dialog initiative. Furthermore, we investigate the
usability of the systems after long-term usage (simulated by fifteen minutes of
practice). We find that even inexperienced computer users know from the
beginning how to speak to the system, which contrasts with assumptions in the
related literature. Inexperienced users perform as well as experienced users
with both systems, before and after the training. We also compare judgments of
the systems before and after the training.
Evaluating talking heads for smart home systems
Multimodal systems I (poster session)
/
Kühnel, Christine
/
Weiss, Benjamin
/
Wechsung, Ina
/
Fagel, Sascha
/
Möller, Sebastian
Proceedings of the 2008 International Conference on Multimodal Interfaces
2008-10-20
p.81-84
Keywords: multimodal ui, smart home environments, talking heads
© Copyright 2008 ACM
Summary: In this paper we report the results of a user study evaluating talking heads
in the smart home domain. Three noncommercial talking head components are
linked to two freely available speech synthesis systems, resulting in six
different combinations. The influence of head and voice components on overall
quality is analyzed as well as the correlation between them. Three different
ways to assess overall quality are presented. It is shown that these three are
consistent in their results. Another important result is that in this design
speech and visual quality are independent of each other. Furthermore, a linear
combination of both quality aspects models overall quality of talking heads to
a good degree.