[1]
Interactive Sonification Markup Language (ISML) for Efficient Motion-Sound
Mappings
Natural User Interfaces
/
Walker, James
/
Smith, Michael T.
/
Jeon, Myounghoon
HCI International 2015: 17th International Conference on HCI, Part II:
Interaction Technologies
2015-08-02
v.2
p.385-394
Keywords: Design research; Interactive sonification; Sonification markup language
© Copyright 2015 Springer International Publishing Switzerland
Summary: Despite the rapid growth of research on auditory display and sonification
mapping per se, little effort has addressed the efficiency or accessibility of
the mapping process itself. To expedite variations on sonification research
configurations, we have developed the Interactive Sonification Markup Language
(ISML). ISML is designed within the context of the Immersive Interactive
Sonification Platform (iISoP) at Michigan Technological University. We present
an overview of the system, the motivation for developing ISML, and the time
savings realized through its development. We then discuss the features of ISML
and its accompanying graphical editor, and conclude by summarizing the system's
feature development and plans for its further enhancement. ISML is
expected to decrease repetitive development tasks for multiple research studies
and to increase accessibility to diverse sonification researchers who do not
have programming experience.
[2]
Development and Evaluation of Emotional Robots for Children with Autism
Spectrum Disorders
Children in HCI
/
Jeon, Myounghoon
/
Zhang, Ruimin
/
Lehman, William
/
Fakhrhosseini, Seyedeh
/
Barnes, Jaclyn
/
Park, Chung Hyuk
HCI International 2015: 17th International Conference on HCI: Posters'
Extended Abstracts, Part I
2015-08-02
v.4
p.372-376
Keywords: Social robotics; Emotion; Autism spectrum disorders
© Copyright 2015 Springer International Publishing Switzerland
Summary: Individuals with Autism Spectrum Disorders (ASD) often have difficulty
recognizing emotional cues in ordinary interaction. To address this, we are
developing a social robot that teaches children with ASD to recognize emotion
in the simpler and more controlled context of interaction with a robot. An
emotion recognition program using the Viola-Jones algorithm for facial
detection is in development. To better understand emotion expression by social
robots, a study was conducted with 11 college students matching animated facial
expressions and emotionally neutral sentences spoken in affective voices to
various emotions. Overall, facial expressions had greater recognition accuracy
and higher perceived intensity than voices. Future work will test recognition of
combined faces and voices.
[3]
Lyricon (Lyrics + Earcons) Improves Identification of Auditory Cues
Information Design
/
Sun, Yuanjing
/
Jeon, Myounghoon
DUXU 2015: Fourth International Conference on Design, User Experience, and
Usability, Part II: Users and Interactions
2015-08-02
v.2
p.382-389
Keywords: Auditory display; Auditory icons; Auditory user interface; Cognitive
mapping; Earcons; Lyricons; Sonification
© Copyright 2015 Springer International Publishing Switzerland
Summary: Auditory researchers have developed various non-speech cues in designing
auditory user interfaces. A preliminary study of "lyricons" (lyrics + earcons
[1]) has provided a novel approach to devising auditory cues in electronic
products, by combining the concurrent two layers of musical speech and earcons
(short musical motives). An experiment on sound-function meaning mapping was
conducted between earcons and lyricons. It demonstrated that lyricons enhanced
the relevance between sound and meaning significantly more than earcons did.
Further analyses of error types and the confusion matrix showed that lyricons
had a higher identification rate and a shorter mapping time
than earcons. Factors affecting auditory cue identification and application
directions of lyricons are discussed.
[4]
Sorry, I'm Late; I'm Not in the Mood: Negative Emotions Lengthen Driving
Time
Safety, Risk and Human Reliability
/
Jeon, Myounghoon
/
Croschere, Jayde
EPCE 2015: 12th International Conference on Engineering Psychology and
Cognitive Ergonomics
2015-08-02
p.237-244
Keywords: Aggressive driving; Anger; Driving simulation research; Emotions; Road rage;
Sadder but wiser
© Copyright 2015 Springer International Publishing Switzerland
Summary: A considerable amount of research has shown that anger degrades driving
performance [e.g., 1, 2, 3], but little research has empirically demonstrated the
effects of other affective states on driving. To investigate the effects of anger
and sadness on driving,
we conducted a driving simulation study with induced affective states. In
cognitive psychology, there is the "sadder but wiser" phenomenon, but given
that driving is a complex, dynamic task that engages not only basic cognitive
processes, but also other critical elements such as decision making, action
selection, and motor control, it might result in different outcomes. Thirty-two
participants were induced into sad, angry, or neutral affective states and
asked to complete a driving task using a medium fidelity driving simulator.
Measures included driving performance, subjective mood ratings, and a NASA-TLX
workload index. Results showed that participants in the angry and sad
conditions took significantly more time to complete the driving task compared
to the neutral condition.
[5]
Robotic Sonification for Promoting Emotional and Social Interactions of
Children with ASD
Late-Breaking Reports -- Session 2
/
Zhang, Ruimin
/
Jeon, Myounghoon
/
Park, Chung Hyuk
/
Howard, Ayanna
Extended Abstracts of the 2015 ACM/IEEE International Conference on
Human-Robot Interaction
2015-03-02
v.2
p.111-112
© Copyright 2015 ACM
Summary: Deficiency in social interaction is one of the most crucial issues for
children with Autism Spectrum Disorder (ASD). To foster their emotional and
social communication, we have developed an orchestration robot platform. After
describing our concepts of the use of sonification in the intervention
sessions, we describe our efforts in developing a facial expression detection
system and implementing a platform-free sonification server system.
[6]
"Not All Visual Media Are Helpful": An Optimal Instructional Medium for
Effective Online Learning
Interactive Posters & Demos: POS2 -- Interactive Posters & Demos
/
Lehtola, Whitney I.
/
Gemignani, Stephen M.
/
Sutherland, Jared T.
/
Jeon, Myounghoon
Proceedings of the Human Factors and Ergonomics Society 2014 Annual Meeting
2014-10-27
p.1351-1355
doi 10.1177/1541931214581282
© Copyright 2014 HFES
Summary: With an increasing online learning population, many questions are arising as
to the best way of teaching online. A number of common methods incorporate
visual formats into the teaching method. Currently, an area lacking in research
is which visual format communicates material most effectively to students. In
this study, the focus was on discovering whether it is more effective to use an
audio-pictorial video rather than an audio-text video. Sixteen undergraduates
participated in this study, divided into groups exposed to one of three test
conditions: audio-video, audio-text, or audio-only (control). The participants
were then asked to complete the task of making a unique paper airplane.
Consistent with our hypothesis, the results showed that the audio-video group had
a significantly higher completion rate for the task than the other two groups,
which showed no difference from each other. Results are discussed in terms of
cognitive load theory and multiple resources theory, and a practical
recommendation is made to use a live audio-video format to teach students
online.
[7]
How Emotions Influence Trust in Online Transactions Using New Technology
Internet: I4/CS -- Usable Interactions
/
Tislar, Catherine
/
Sterkenburg, Jason
/
Zhang, Wei
/
Jeon, Myounghoon
Proceedings of the Human Factors and Ergonomics Society 2014 Annual Meeting
2014-10-27
p.1531-1535
doi 10.1177/1541931214581319
© Copyright 2014 HFES
Summary: Online trust has recently become a critical issue, due to widely publicized
information leaks, account hacking, and privacy breaches. This study
investigates whether or not emotions have effects on trust in online
transactions, particularly when a new technology is involved. We explored the
effects of happiness and sadness on participants' choice of a payment method
for online transactions. Forty-four undergraduates participated in online
transactions with a prototype webpage after either happiness or sadness
induction, compared to a neutral group. Different emotion mechanisms would
predict different effects of each emotion. Results showed that when the item
cost was relatively low ($10), a higher percentage of participants in both
emotion conditions selected a novel payment method than those in a neutral
condition. With more expensive items ($50 and $100), the number of participants
who chose the new option increased equally across all conditions, because
participants could receive a relatively large discount (10%) from the novel
payment method. Various emotion mechanisms are discussed in light of our
results.
[8]
If You're Angry, Turn the Music on: Music Can Mitigate Anger Effects on
Driving Performance
Podium Presentations: Driver emotions and physiological state
/
Fakhrhosseini, Seyedeh Maryam
/
Landry, Steven
/
Tan, Yin Yin
/
Bhattarai, Saru
/
Jeon, Myounghoon
AutomotiveUI 2014: International Conference on Automotive User Interfaces
and Interactive Vehicular Applications
2014-09-17
v.1
n.7 pages
p.18
© Copyright 2014 ACM
Summary: Research has focused on music's negative effects on a driver's attention,
whereas little research has addressed the possibility of using music to reduce
emotional effects on driving. In the present study, we investigate how music
can mitigate the degenerated driving performance associated with angry driving.
To this end, fifty-three drivers participated in a simulated driving study
either with or without induced anger. Three groups of participants with induced
anger drove in a simulator while listening to happy or sad instrumental pieces,
or without music. In the control group, anger was not induced and participants
did not listen to music while driving. The results show that participants who listened
to either happy or sad music had significantly fewer driving errors than those
who did not listen to music. However, no significant differences were found
between the happy and sad music conditions. Results are discussed in terms of an
affect-regulation model and directions for future research.
[9]
Social, Natural, and Peripheral Interactions: Together and Separate
Social, Natural, and Peripheral Interactions: Together and Separate
/
Riener, Andreas
/
Alvarez, Ignacio
/
Pfleging, Bastian
/
Löcken, Andreas
/
Jeon, Myounghoon
/
Müller, Heiko
/
Chiesa, Mario
AutomotiveUI 2014: International Conference on Automotive User Interfaces
and Interactive Vehicular Applications, Adjunct Proceedings
2014-09-17
v.2
n.6 pages
p.1
© Copyright 2014 ACM
Summary: A major challenge in the future of traffic is to understand how
"socially-aware vehicles" could make use of their social habitus, formed
by any information that can be inferred from past and present social relations,
social interactions, and a driver's social state when exposed to other
participants in real, live traffic. In recognition of this challenge, the
workshop aims to advance a common understanding of the symbiosis between
drivers, cars, and the infrastructure. The central objective of the workshop is
to provoke an active debate on the adequacy of the concept of social, natural,
and peripheral interaction, addressing questions such as "who can communicate
what", "when", "how", and "why"? To tackle these questions, we would like to
collect different, radical, innovative, versatile, and engaging works that
challenge or re-imagine human interactions in the near future automobile space.
[10]
Advanced Vehicle Sonification Applications
Social, Natural, and Peripheral Interactions: Together and Separate
/
Jeon, Myounghoon
AutomotiveUI 2014: International Conference on Automotive User Interfaces
and Interactive Vehicular Applications, Adjunct Proceedings
2014-09-17
v.2
n.5 pages
p.3
© Copyright 2014 ACM
Summary: Visual displays are still mainly used in the in-vehicle context, but they
may be problematic for providing timely, appropriate feedback to drivers. To
compensate for the drawbacks of visual displays, multimodal displays have been
developed, but applied to limited areas (e.g., collision warning sounds). The
present paper introduces advanced vehicle sonification applications: Two of our
on-going projects (fuel efficiency sonification and driver emotion
sonification) and a plausible future project (nearby traffic sonification). In
addition, applicable sonification techniques and solutions are provided.
Sonification applications in these areas can be an effective, unobtrusive means
of increasing drivers' situation awareness and engagement with driving, which
will improve road safety. To successfully implement these applications,
iterative and intensive assessment of driver needs, effectiveness of the
application, and its impact on driver distraction and road safety should be
conducted.
[11]
Predictive parallelization: taming tail latencies in web search
Session 3b: indexing and efficiency
/
Jeon, Myeongjae
/
Kim, Saehoon
/
Hwang, Seung-won
/
He, Yuxiong
/
Elnikety, Sameh
/
Cox, Alan L.
/
Rixner, Scott
Proceedings of the 2014 Annual International ACM SIGIR Conference on
Research and Development in Information Retrieval
2014-07-06
p.253-262
© Copyright 2014 ACM
Summary: Web search engines are optimized to reduce the high-percentile response time
to consistently provide fast responses to almost all user queries. This is a
challenging task because the query workload exhibits large variability,
consisting of many short-running queries and a few long-running queries that
significantly impact the high-percentile response time. With modern multicore
servers, parallelizing the processing of an individual query is a promising
solution to reduce query execution time, but it gives limited benefits compared
to sequential execution since most queries see little or no speedup when
parallelized. The root of this problem is that short-running queries, which
dominate the workload, do not benefit from parallelization. They incur a large
parallelization overhead, taking scarce resources from long-running queries. On
the other hand, parallelization substantially reduces the execution time of
long-running queries with low overhead and high parallelization efficiency.
Motivated by these observations, we propose a predictive parallelization
framework with two parts: (1) predicting long-running queries, and (2)
selectively parallelizing them. For the first part, prediction should be
accurate and efficient. For accuracy, we study a comprehensive feature set
covering both term features (reflecting dynamic pruning efficiency) and query
features (reflecting query complexity). For efficiency, to keep overhead low,
we avoid expensive features that have excessive requirements such as large
memory footprints. For the second part, we use the predicted query execution
time to parallelize long-running queries and process short-running queries
sequentially. We implement and evaluate the predictive parallelization
framework in Microsoft Bing search. Our measurements show that under moderate
to heavy load, the predictive strategy reduces the 99th-percentile response
time by 50% (from 200 ms to 100 ms) compared with prior approaches that
parallelize all queries.
[12]
Constructing the Immersive Interactive Sonification Platform (iISoP)
User Experience in Intelligent Environments
/
Jeon, Myounghoon
/
Smith, Michael T.
/
Walker, James W.
/
Kuhl, Scott A.
DAPI 2014: 2nd International Conference on Distributed, Ambient, and
Pervasive Interactions
2014-06-22
p.337-348
Keywords: design research; interactive sonification; interactivity; visualization
© Copyright 2014 Springer International Publishing
Summary: For decades, researchers have advanced research on sonification, the use of
non-speech audio to convey information [1]. With 'interaction' and 'user
experience' becoming pervasive, interactive sonification [2], an emerging
interdisciplinary area, has been introduced, and its role and importance have
rapidly increased in the auditory display community. From this background, we
have devised a novel platform, "iISoP" (immersive Interactive Sonification
Platform) for location, movement, and gesture-based interactive sonification
research, by leveraging the existing Immersive Visualization Studio (IVS) at
Michigan Tech. Projects in each developmental phase and planned research are
discussed with a focus on "design research" and "interactivity".
[13]
Auditory Emoticons: Iterative Design and Acoustic Characteristics of
Emotional Auditory Icons and Earcons
Natural and Multimodal Interfaces
/
Sterkenburg, Jason
/
Jeon, Myounghoon
/
Plummer, Christopher
HCI International 2014: 16th International Conference on HCI, Part II:
Advanced Interaction Modalities and Techniques
2014-06-22
v.2
p.633-640
Keywords: auditory icons; earcons; auditory emoticons; non-speech sounds; sonification
© Copyright 2014 Springer International Publishing
Summary: In recent decades there has been increasing interest in sonification
research. Two commonly used sonification techniques, auditory icons and
earcons, have been studied extensively. However, relatively little research has
investigated the relationship between these techniques and emotion and affect.
Additionally, despite their popularity, auditory icons and earcons are often
treated separately and are rarely compared directly in studies. The current
paper describes iterative
design procedures to create emotional auditory icons and earcons. The ultimate
goal of the study is to compare auditory icons and earcons in their ability to
represent emotional states. The results show that there are some strong user
preferences both within sonification categories and between sonification
categories. The implications and extensions of this work are discussed.
[14]
Increasing Patient Compliance and Satisfaction With Physical Therapy
Web-Based Applications
Posters: POS2 -- Poster & Demo Interactive Session 2
/
Ellis, Katrina M.
/
Norman, Chad
/
Van der Merwe, Alex
/
Jeon, Myounghoon
Proceedings of the Human Factors and Ergonomics Society 2013 Annual Meeting
2013-09-30
p.1531-1535
doi 10.1177/1541931213571341
© Copyright 2013 HFES
Summary: Performing independent exercises after clinical visits is crucial for
patients recovering from injury. However, patients often fail to comply with
physical therapist prescriptions due to lack of time and lack of appropriate
feedback. The present study investigated the current medium used for prescribed
exercises and compared it to mediums used in web-based applications. In Phase
I, we surveyed thirteen practicing physical therapists and twenty-two patients
of physical therapy. Responses suggested that video instruction of exercises
and video-conference meetings between clinic visits would be beneficial to
patient rehabilitation. In Phase II, with fifty-eight undergraduate
participants, we examined the influence of self-efficacy and format of
instructional materials on willingness to comply, satisfaction with
information, and anxiety related to completing rehabilitation. We found that
video with text instructions was most satisfying to students. Results are
discussed along with limitations of the present study and future work.
[15]
Sadder but Wiser? Effects of Negative Emotions on Risk Perception, Driving
Performance, and Perceived Workload
Surface Transportation: ST1 -- Health, Behavior, and Emotion
/
Jeon, Myounghoon
/
Zhang, Wei
Proceedings of the Human Factors and Ergonomics Society 2013 Annual Meeting
2013-09-30
p.1849-1853
doi 10.1177/1541931213571413
© Copyright 2013 HFES
Summary: Traditional affect research has frequently used a valence dimension --
positive and negative states. However, these approaches have not discriminated
the effects of distinct emotions of the same valence. Recent findings have
indicated that different emotions may have different impacts even though they
belong to the same valence. The current study consists of a simulated driving
experiment with two induced affective states to examine how sadness and anger
differently influence driving-related risk perception, driving performance, and
perceived workload. Thirty-two undergraduates drove under three different road
conditions with induced sadness, anger, or neutral emotions. Participants in
both affect conditions showed significantly more errors than those in the
neutral condition. However, only participants with induced anger reported
significantly higher perceived workload than participants in the neutral condition. Results
are discussed in terms of affect mechanisms and design directions for the
in-vehicle emotion regulation system.
[16]
The Ecological AUI (Auditory User Interface) Design and Evaluation of User
Acceptance for Various Tasks on Smartphones
Speech, Natural Language and Auditory Interfaces
/
Jeon, Myounghoon
/
Lee, Ju-Hwan
HCI International 2013: 15th International Conference on HCI, Part IV:
Interaction Modalities and Techniques
2013-07-21
v.4
p.49-58
Keywords: Auditory user interface; ecological user interface design; smartphones; user
acceptance
© Copyright 2013 Springer-Verlag
Summary: With the rapid development of touch screen technology, some usability
issues of smartphones have been reported [1]. To tackle those user experience
issues, there has been research on the use of non-speech sounds on the mobile
devices [e.g., 2, 3-7]. However, most of them have focused on a single specific
task of the device. Given the varying functions of the smartphone, the present
study designed plausibly integrated auditory cues for diverse functions and
evaluated user acceptance levels from the ecological interface design
perspective. Results showed that sophisticated auditory design could change
users' preference for and acceptance of the interface, and that the extent of
this change depended on the usage context. Overall, participants gave significantly higher scores on the
functional satisfaction and the fun scales in the sonically-enhanced
smartphones than in the no-sound condition. The balanced sound design may free
users from auditory pollution and allow them to use their devices more
pleasantly.
[17]
Designing Interactive Sonification for Live Aquarium Exhibits
Multimodal and Ambient Interaction
/
Jeon, Myounghoon
/
Winton, Riley J.
/
Henry, Ashley G.
/
Oh, Sanghun
/
Bruce, Carrie M.
/
Walker, Bruce N.
HCI International 2013: 15th International Conference on HCI: Posters'
Extended Abstracts Part I
2013-07-21
v.6
p.332-336
Keywords: Embodied interaction; interactive learning; interactive sonification;
interactivity; tangible objects
© Copyright 2013 Springer-Verlag
Best poster award
Summary: In response to the need for more accessible and engaging informal learning
environments (ILEs), researchers have studied sonification for use in
interpretation of live aquarium exhibits. The present work attempts to
introduce more interactivity to the project's existing sonification work, which
is expected to lead to more accessible and interactive learning opportunities
for visitors, including children and people with vision impairment. In this
interactive sonification environment, visitors can actively experience an
exhibit by using tangible objects to mimic the movement of animals.
Sonifications corresponding to their movement can be paired with real-time
animal-based sonifications produced by the existing system to generate a
musical fugue. In the current paper, we describe the system configurations,
experiment results for optimal sonification parameters and interaction levels,
and implications in terms of embodied interaction and interactive learning.
[18]
Lyricons (Lyrics + Earcons): Designing a New Auditory Cue Combining Speech
and Sounds
Multimodal and Ambient Interaction
/
Jeon, Myounghoon
HCI International 2013: 15th International Conference on HCI: Posters'
Extended Abstracts Part I
2013-07-21
v.6
p.342-346
Keywords: Auditory displays; lyricons; speech sounds; non-speech sounds
© Copyright 2013 Springer-Verlag
Summary: To complement visual displays, auditory researchers have developed various
auditory cues such as auditory icons, earcons, spearcons, and spindex cues.
Even though those auditory cues were successfully applied to a number of
electronic devices, they still require some improvements. From this background,
the present work introduces more intuitive and fun auditory cues, "Lyricons
(Lyrics + Earcons), which integrate the benefits of speech (i.e., accuracy) and
earcons (i.e., aesthetics). We categorized functions of electronic products
into meta-functional groups and devised a plausible earcon set for each
functional group. Nine students conducted the sound card sorting task to match
earcons with functional groups and brainstormed to generate lyrics for each
functional group. Based on the results, several lyricon sets were created and
improvements and application directions were discussed in focus group sessions.
The use of lyricons is expected to increase accessibility to electronic devices
for multiple users, including novices, older adults, children, and people with
vision impairment.
[19]
Workload Characterization and Performance Implications of Large-Scale Blog
Servers
/
Jeon, Myeongjae
/
Kim, Youngjae
/
Hwang, Jeaho
/
Lee, Joonwon
/
Seo, Euiseong
ACM Transactions on the Web
2012-11
v.6
n.4
p.17
© Copyright 2012 ACM
Summary: With the ever-increasing popularity of Social Network Services (SNSs), an
understanding of the characteristics of these services and their effects on the
behavior of their host servers is critical. However, there has been a lack of
research on the workload characterization of servers running SNS applications
such as blog services. To fill this void, we empirically characterized
real-world Web server logs collected from one of the largest South Korean blog
hosting sites for 12 consecutive days. The logs consist of more than 96 million
HTTP requests and 4.7TB of network traffic. Our analysis reveals the following:
(i) The transfer size of nonmultimedia files and blog articles can be modeled
using a truncated Pareto distribution and a log-normal distribution,
respectively; (ii) user access for blog articles does not show temporal
locality, but is strongly biased towards those posted with image or audio
files. We additionally discuss the potential performance improvement through
clustering of small files on a blog page into contiguous disk blocks, which
benefits from the observed file access patterns. Trace-driven simulations show
that, on average, the suggested approach achieves 60.6% better system
throughput and reduces the processing time for file access by 30.8% compared to
the best performance of the Ext4 filesystem.
[20]
Spearcons Improve Navigation Performance and Perceived Speediness in Korean
Auditory Menus
Perception and Performance: PP5 -- Auditory -- Visual Displays
/
Suh, Hyewon
/
Jeon, Myounghoon
/
Walker, Bruce N.
Proceedings of the Human Factors and Ergonomics Society 2012 Annual Meeting
2012-10-22
p.1361-1365
doi 10.1177/1071181312561390
© Copyright 2012 HFES
Summary: For decades, auditory menus using both speech (usually text-to-speech, TTS)
and non-speech sounds have been extensively studied. Researchers have developed
situation-optimized auditory menus involving such cues as auditory icons,
earcons, spearcons, and spindex. Spearcons have generally outperformed other
cues in terms of providing both contextual information and item-specific
information. However, little research has been devoted to exploration of
spearcons in languages other than English, or the use of spearcon-only auditory
menus. In this study, we evaluated the use of spearcons in Korean menus, as
well as the use of spearcons alone. Twenty-five native Korean speakers
navigated through a two-dimensional auditory menu presented via TTS, with or
without spearcon enhancements. Korean spearcons proved successful. Participants
also rated the spearcon-enhanced menu as seeming speedier and more fun than the
TTS-only menu. After a short learning period, mean time-to-target in the
auditory menu was even faster with spearcons alone, compared to traditional
TTS-only menus.
[21]
Cross-cultural differences in the use of in-vehicle technologies and vehicle
area network services: Austria, USA, and South Korea
Multimodal interaction
/
Jeon, Myounghoon
/
Riener, Andreas
/
Lee, Ju-Hwan
/
Schuett, Jonathan
/
Walker, Bruce N.
AutomotiveUI 2012: International Conference on Automotive User Interfaces
and Interactive Vehicular Applications
2012-10-17
p.163-170
© Copyright 2012 ACM
Summary: Vehicle area network (VAN) communications and related services are getting
more pervasive [1]. However, even though user-centered design has been
emphasized, VAN services have often been developed through a technology-driven
approach. This paper presents cross-cultural survey results on VAN services in
three different countries: Austria, USA, and South Korea. The current research
compared the state-of-the-art of drivers' current in-vehicle technology use and
investigated their needs and wants for plausible new services in the near
future. Further, we validated our next generation in-vehicle interface concepts
stemming from our previous participatory design process [2]. Results showed
clear differences between Austrians on the one hand and Americans and Koreans
on the other. Even though the Koreans and Americans in our survey were older
than the Austrians, they seemed more open-minded toward VAN services (e.g.,
in-car social networks, V2V services, an in-vehicle agent, etc.) in general and
rated them more positively. Through these
cross-cultural needs analyses of end users, designers and practitioners are
expected to gain insights into developing a standardized service across
cultures as well as culturally tuned in-vehicle interfaces. Moreover, we hope
that this initial international collaboration can serve as a good test bed for
future research, and we aim to expand our consortium with more colleagues in
the AutomotiveUI community for further cross-cultural studies.
[22]
Methodical approaches to prove the effects of subliminal perception in
ubiquitous computing environments
Methodical Approaches to Prove the Effects of Subliminal Perception in
Ubiquitous Computing Environments
/
Riener, Andreas
/
Reiner, Miriam
/
Jeon, Myounghoon
/
Chalfoun, Pierre
Proceedings of the 2012 International Conference on Ubiquitous Computing
2012-09-05
p.1120-1121
© Copyright 2012 ACM
Summary: To cope with the rising volume of information in human-computer interfaces,
explicit and attentive interaction is more and more frequently replaced by
implicit means of information exchange, supported by context- and activity-aware
systems and applications. The trend toward excessive information is, however,
still ongoing, calling for further solutions to reduce a person's cognitive load
or level of attention. Subliminal interaction techniques are considered a
promising approach to deliver information to a person without causing much
supplementary workload. This workshop aims at discussing the potential of
subliminal perception to improve the information flow for human-computer
interaction in the light of the fact that, up to now, the results have been
mixed. One group of researchers has provided evidence that subliminal
stimulation works, but the other has found that it does not, or even cannot,
work. To clarify this issue, experts from various domains attending the
workshop will discuss how subliminal effects can be scientifically supported or
how a certain claim could be empirically refuted.
[23]
The role of subliminal perception in vehicular interfaces
Methodical Approaches to Prove the Effects of Subliminal Perception in
Ubiquitous Computing Environments
/
Riener, Andreas
/
Jeon, Myounghoon
Proceedings of the 2012 International Conference on Ubiquitous Computing
2012-09-05
p.1122-1126
© Copyright 2012 ACM
Summary: Following laws and provisions passed on the national and international
level, the most relevant goal of future traffic and vehicular interfaces is to
increase road safety. To alleviate the cognitive load associated with the
interaction with the variety of emerging information and assistance systems in
the car, subliminal stimulation is assumed to be a promising technique. To
assess the potential of subliminal cues as an interaction means in future
vehicles, we organized a workshop within the frame of the automotive user
interfaces conference (AutoUI 2011) to discuss this topic with a group of
experts. This paper summarizes the findings from that workshop and should give
researchers a starting point for their own activities in the field by
indicating the grand research challenges and most critical issues. In
particular, the goal of this summary article is to make this challenging
research field more 'tangible' for researchers working in a range of
disciplines, such as engineering, neuroscience, computer science, and
psychophysiology. While currently discussed in the automotive domain only, the
principles, research questions, and findings could immediately (and easily) be
transferred to and adopted in other research fields. Interaction based on
subliminal techniques can have an impact on society at large, making
significant contributions toward a more natural, convenient, and even relaxing
future style of interaction with any complex systems.
[24]
A systematic approach to using music for mitigating affective effects on
driving performance and safety
Methodical Approaches to Prove the Effects of Subliminal Perception in
Ubiquitous Computing Environments
/
Jeon, Myounghoon
Proceedings of the 2012 International Conference on Ubiquitous Computing
2012-09-05
p.1127-1132
© Copyright 2012 ACM
Summary: Research has shown that affective effects on driving performance and safety
are as dangerous as (or even more dangerous than) effects of the secondary
tasks [11]. There has been some research on the use of speech-based systems for
intervention, but little research has attempted to use music to mitigate a
driver's affective states while driving. The current paper
identifies various taxonomies of the effects of music and explores plausible
research variables, considerations, and practical application directions.
[25]
"Spindex" (Speech Index) Enhances Menus on Touch Screen Devices with
Tapping, Wheeling, and Flicking
/
Jeon, Myounghoon
/
Walker, Bruce N.
/
Srivastava, Abhishek
ACM Transactions on Computer-Human Interaction
2012-07
v.19
n.2
p.14
© Copyright 2012 ACM
Summary: Users interact with many electronic devices via menus such as auditory or
visual menus. Auditory menus can either complement or replace visual menus. We
investigated how advanced auditory cues enhance auditory menus on a smartphone,
with tapping, wheeling, and flicking input gestures. The study evaluated a
spindex (speech index), in which audio cues inform users where they are in a
menu; 122 undergraduates navigated through a menu of 150 songs. Study variables
included auditory cue type (text-to-speech alone or TTS plus spindex), visual
display mode (on or off), and input gesture (tapping, wheeling, or flicking).
Target search time and subjective workload were lower with spindex than without
for all input gestures regardless of visual display mode. The spindex condition
was rated subjectively higher than plain speech. The effects of input method
and display mode on navigation behaviors were analyzed with the two-stage
navigation strategy model. Results are discussed in relation to attention
theories and in terms of practical applications.
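The spindex idea evaluated above can be sketched minimally: each menu item is paired with a very brief cue based on its initial sound, played before (or instead of) the full TTS while the user scrolls quickly. Representing the cue as just the item's first letter is a simplification for illustration, not the study's actual audio rendering.

```python
def build_spindex_menu(items):
    """Pair each menu item with its spindex cue (here, its initial letter)."""
    return [(item, item[0].upper()) for item in items]

# A small song menu: while flicking quickly past items, only the short
# cues are played, letting users skim to the target letter group before
# slowing down to hear full item names.
menu = build_spindex_menu(["Across the Universe", "Blackbird", "Band on the Run"])
cues = [cue for _, cue in menu]  # ["A", "B", "B"]
```

Because adjacent items in an alphabetized menu share cues, rapid scrolling produces a sparse, low-effort overview of menu position, which is consistent with the reduced search times and workload reported above.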