
AutomotiveUI 2013: International Conference on Automotive User Interfaces and Interactive Vehicular Applications

Fullname: Proceedings of the 5th International Conference on Automotive User Interfaces and Interactive Vehicular Applications
Editors: Jacques Terken
Location: Eindhoven, Netherlands
Dates: 2013-Oct-28 to 2013-Oct-30
Publisher: ACM
Standard No: ISBN: 978-1-4503-2478-6; ACM DL: Table of Contents; hcibib: AutomotiveUI13
Papers: 41
Pages: 307
Links: Conference Website
  1. Interaction techniques 1 -- gesturing
  2. Interaction techniques 2 -- pointing
  3. Measuring and reducing distraction
  4. Multimodal interaction
  5. Texting and calling
  6. Driver modelling
  7. Methodology
  8. Experience
  9. Posters

Interaction techniques 1 -- gesturing

Standardization of the in-car gesture interaction space (pp. 14-21)
  A. Riener; A. Ferscha; F. Bachmair; P. Hagmüller; A. Lemme; D. Muttenthaler; D. Pühringer; H. Rogner; A. Tappe; F. Weger
Driven by technological advancements, gesture interfaces have recently found their way into vehicular prototypes of various kinds. Unfortunately, their application is less than perfect, and detailed information about preferred gesture execution regions, spatial extent, and time behavior is not yet available. Providing car (interior) manufacturers with gesture characteristics would allow them to design future in-vehicle concepts so as not to interfere with gestural interaction. To tackle the problem, this research serves as preliminary work toward a later standardization of the diverse properties of gestures and gesture classes, similar to what is already standardized in norms such as ISO 3958/4040 for the placement and reachability of traditional controls and indicators. We set up a real driving experiment recording trajectories and time behavior of gestures related to car and media control tasks. Data evaluation reveals that most of the subjects perform gestures in the same region (bounded by a "triangle" of steering wheel, rear mirror, and gearshift) and with similar duration (on average below 2 sec.). The generated density plots can be further used for an initial discussion about gesture execution in the passenger compartment. The final aim is to propose a new standard on permitted gesture properties (time, space) in the car.
A study of unidirectional swipe gestures on in-vehicle touch screens (pp. 22-29)
  Gary Burnett; Elizabeth Crundall; David Large; Glyn Lawson; Lee Skrypchuk
Touch screens are increasingly used within modern vehicles, providing the potential for a range of gestures to facilitate interaction under divided attention conditions. This paper describes a study aiming to understand how drivers naturally make swipe gestures in a vehicle context when compared with a stationary setting. Twenty experienced drivers were requested to undertake a swipe gesture on a touch screen in a manner they felt was appropriate to execute a wide range of activate/deactivate, increase/decrease and next/previous tasks. All participants undertook the tasks either whilst driving in a right-hand-drive, medium-fidelity simulator or whilst sitting stationary. Consensus emerged in the direction of swipes made for a relatively small number of increase/decrease and next/previous tasks, particularly related to playing music. The physical action of a swipe made in different directions was found to affect the length and speed of the gesture. Finally, swipes were typically made more slowly in the driving situation, reflecting the reduced resources available in this context and/or the handedness of the participants. Conclusions are drawn regarding the future design of swipe gestures for interacting with in-vehicle touch screens.
Opportunistic synergy: a classifier fusion engine for micro-gesture recognition (pp. 30-37)
  Leonardo Angelini; Francesco Carrino; Stefano Carrino; Maurizio Caon; Denis Lalanne; Omar Abou Khaled; Elena Mugellini
In this paper, we present a novel opportunistic paradigm for in-vehicle gesture recognition. This paradigm allows using two or more subsystems in a synergistic manner: they can work in parallel, but the absence of some of them does not compromise the functioning of the whole system. In order to segment and recognize micro-gestures performed by the user on the steering wheel, we combine a wearable approach based on the electromyography of the user's forearm muscles with an environmental approach based on pressure sensors integrated directly into the steering wheel. We present and analyze several fusion methods and gesture segmentation strategies. A prototype has been developed and evaluated with data from nine subjects. The results show that the proposed opportunistic system performs as well as or better than each stand-alone subsystem while increasing the interaction possibilities.

Interaction techniques 2 -- pointing

Free-hand pointing for identification and interaction with distant objects (pp. 40-47)
  Sonja Rümelin; Chadly Marouane; Andreas Butz
In this paper, we investigate pointing as a lightweight form of gestural interaction in cars. In a pre-study, we show the technical feasibility of reliable pointing detection with a depth camera by achieving a recognition rate of 96% in the lab. In a subsequent in-situ study, we let drivers point to objects inside and outside of the car while driving through a city. In three usage scenarios, we studied how this influenced their driving objectively, as well as subjectively. Distraction from the driving task was compensated by a regulation of driving speed and did not have a negative influence on driving behaviour. Our participants considered pointing a desirable interaction technique in comparison to current controller-based interaction and identified a number of additional promising use cases for pointing in the car.
How to make large touch screens usable while driving (pp. 48-55)
  Sonja Rümelin; Andreas Butz
Large touch screens have recently been appearing in the automotive market, yet their usability while driving is still controversial. Flat screens do not provide haptic guidance and thus require visual attention to locate the interactive elements that are displayed. Thus, we need to think about new concepts that minimize the visual attention needed for interaction, to keep the driver's focus on the road and ensure safety.
   In this paper, we explore three different approaches. The first one is designed to make use of proprioception. The second approach incorporates physical handles to ease orientation on a large flat surface. In the third approach, directional touch gestures are applied. We describe the results of a comparative study that investigates the required visual attention as well as task performance and perceived usability, in comparison to a state-of-the-art multifunctional controller.
   We found that direct touch buttons provide the best results regarding task completion time, but with a size of about 6x8 cm, they were not yet large enough for blind interaction. Physical elements in and around the screen space were regarded as useful for easing orientation. With touch gestures, participants were able to reduce visual attention to a lower level than with the remote controller. Considering our findings, we argue that there are ways to make large screens more appropriate for in-car usage and thus harness the advantages they provide in other respects.
Driver queries using wheel-constrained finger pointing and 3-D head-up display visual feedback (pp. 56-62)
  Kikuo Fujimura; Lijie Xu; Cuong Tran; Rishabh Bhandari; Victor Ng-Thow-Hing
With the capability of fast, wireless communication, combined with cloud and location-based services, modern drivers can potentially access a wide variety of information about their automobile's environment. This paper presents a system for information query by the driver using a simple pointing mechanism, combined with visual feedback in the form of a 3-D Head-up Display (3D-HUD). Because of its 3-D properties, the HUD can also be used for Augmented Reality (AR), as it allows physical elements in the driver's field of view to be annotated with computer graphics. The combination of simple natural user input tailored to the constraints of the driver with a see-through 3D-HUD allows drivers to query information while minimizing visual and manual distraction.

Measuring and reducing distraction

Advanced auditory cues on mobile phones help keep drivers' eyes on the road (pp. 66-73)
  Thomas M. Gable; Bruce N. Walker; Haifa R. Moses; Ramitha D. Chitloor
In-vehicle technologies can create dangerous situations through driver distraction. In recent years, research has focused on driver distraction through communications technologies, but other tasks, such as scrolling through a list of songs or names, can also carry high attention demands. Research has revealed that the use of advanced auditory cues for in-vehicle technology interaction can decrease cognitive demand and improve driver performance when compared to a visual-only system. This paper discusses research investigating the effects of applying advanced auditory cues to a search task on a mobile device while driving, particularly focusing on visual fixation. Twenty-six undergraduates performed a search task through a list of 150 songs on a cell phone while performing the lane change task and wearing eye-tracking glasses. Eye-tracking data, performance, workload, and preferences for six conditions were collected. Compared to no sound, visual fixation time on driving and preferences were found to be significantly higher for the advanced auditory spindex cue. Results suggest more visual availability for driving when the spindex cue is applied to the search task and provide further evidence that these advanced auditory cues can lessen distraction from driving while using mobile devices to search for items in lists.
ADAS HMI using peripheral vision (pp. 74-81)
  Sabine Langlois
We propose to enhance the utility of Advanced Driver Assistance Systems (ADAS) with an interface that creates luminous signals that can be processed by peripheral vision while driving. The system, called Lighting Peripheral Display (LPD), consists of a box illuminated by LEDs whose light is reflected onto the windscreen. The shapes of the box are designed so that the reflections can easily match the problems signaled by the ADAS. Surface, colors and movements are modulated to convey graduated urgency and to discriminate between the different assistance systems.
   A user test was conducted on a driving simulator to compare a cluster with and without the LPD. Both subjective and objective data (oculometry, vehicle parameters) were collected. They show that driving performance and comfort are enhanced by the LPD. Reaction time is reduced for the most frequent warnings, and the perceived utility of the ADAS is increased. However, drivers' eyes tend to look at the LPD instead of the cluster, so the use of peripheral vision is not validated; but, as the ocular path is shorter with the LPD, the display helps drivers keep their eyes on the road.
Visual-manual in-car tasks decomposed: text entry and kinetic scrolling as the main sources of visual distraction (pp. 82-89)
  Tuomo Kujala; Johanna Silvennoinen; Annegret Lasch
Distraction effects of in-car tasks with a touch screen based navigation system user interface were studied in a driving simulator experiment with eye tracking. The focus was to examine which particular in-car task components visually distract drivers the most. The results indicate that all of the visual-manual in-car tasks led to increased levels of experienced demands and to lower driving speeds. The most significant finding was that text entry and kinetic scrolling of lists were the main sources of visual distraction, whereas simple selection tasks with familiar target locations led to the least severe distraction effects.

Multimodal interaction

Evaluating multimodal driver displays of varying urgency (pp. 92-99)
  Ioannis Politis; Stephen Brewster; Frank Pollick
Previous studies have evaluated Audio, Visual and Tactile warnings for drivers, highlighting the importance of conveying the appropriate level of urgency through the signals. However, these modalities have never been combined exhaustively with different urgency levels and tested while using a driving simulator. This paper describes two experiments investigating all multimodal combinations of such warnings along three different levels of designed urgency. The warnings were first evaluated in terms of perceived urgency and perceived annoyance in the context of a driving simulator. The results showed that the perceived urgency matched the designed urgency of the warnings. More urgent warnings were also rated as more annoying but the effect of annoyance was lower compared to urgency. The warnings were then tested for recognition time when presented during a simulated driving task. It was found that warnings of high urgency induced quicker and more accurate responses than warnings of medium and of low urgency. In both studies, the number of modalities used in warnings (one, two or three) affected both subjective and objective responses. More modalities led to higher ratings of urgency and annoyance, with annoyance having a lower effect compared to urgency. More modalities also led to quicker responses. These results provide implications for multimodal warning design and reveal how modalities and modality combinations can influence participant responses during a simulated driving task.
Comparing three novel multimodal touch interfaces for infotainment menus (pp. 100-107)
  Richard Swette; Keenan R. May; Thomas M. Gable; Bruce N. Walker
Three novel interfaces for navigating a hierarchical menu while driving were experimentally evaluated. Prototypes utilized redundant visual and auditory feedback (multimodal), and were compared to a conventional direct touch interface. All three multimodal prototypes employed an external touchpad separate from the infotainment display in order to afford simple eyes-free gesturing. Participants performed a basic driving task while concurrently using these prototypes to perform menu selections. Mean lateral lane deviation, eye movements, secondary task speed, and self-reported workload were assessed for each condition. Of all conditions, swiping the touchpad to move one-by-one through menu items yielded significantly smaller lane deviations than direct touch. In addition, in the serial swipe condition, the same time spent looking at the prototype was distributed over a longer interaction time. The remaining multimodal conditions allowed users to feel around a pie or list menu to find touchpad zones corresponding to menu items, allowing for either exploratory browsing or shortcuts. This approach, called GRUV, was ineffective compared to serial swiping and direct touch, possibly due to its uninterruptible interaction pattern and overall novelty. The proposed explanation for the performance benefits of the serial swiping condition was that it afforded flexible sub-tasking and incremental progress, in addition to providing multimodal output.
Using speech, GUIs and buttons in police vehicles: field data on user preferences for the Project54 system (pp. 108-113)
  W. Thomas Miller; Andrew L. Kun
The Project54 mobile system for law enforcement developed at the University of New Hampshire integrates the control of disparate law enforcement devices such as radar, VHF radio, video, and emergency lights and siren. In addition, it provides access to state and national law enforcement databases via wireless data queries. Officers using Project54 are free to inter-mix three different user interface modes: the devices' native controls; an LCD touchscreen with keyboard and mouse; and voice commands with voice feedback. The Project54 system was utilized agency-wide by the New Hampshire State Police for a period of seven years spanning 2005 through 2011. This paper presents an analysis of user preferences regarding user interface modes during the three years 2009 through 2011, obtained through logs of daily system use in approximately 200 police cruisers. Results indicate that most officers chose to use the touch screen controls frequently instead of the devices' native controls, but only a minority chose to use the speech command interface.
International evaluation of NLU benefits in the domain of in-vehicle speech dialog systems (pp. 114-120)
  Linn Hackenberg; Sara Bongartz; Christian Härtle; Paul Leiber; Thorb Baumgarten; Jo Ann Sison
An in-vehicle speech dialog system (SDS) can support visual-haptic interfaces and reduce eyes-off-road time while driving. This work evaluates two SDSs that varied in the degree of natural language understanding they afforded. In a Wizard of Oz simulation, two alternative SDSs were tested in a driving simulator. The Lane Change Test was used to compare a command-and-control system with a system supporting natural language input. This driving simulator study was conducted using the same setup in Germany, the USA, and China. 40 participants per country were instructed to perform interaction tasks from contexts like media, telephone, and navigation. The results show that a natural language SDS could lead to a faster and more intuitive way of interacting with in-vehicle SDSs. US and Chinese users especially preferred the natural-language-enabled system over the command-and-control system.

Texting and calling

Texting while driving: is speech-based texting less risky than handheld texting? (pp. 124-130)
  Jibo He; Alex Chaparro; Bobby Nguyen; Rondell Burge; Joseph Crandall; Barbara Chaparro; Rui Ni; Shi Cao
Research indicates that using a cell phone to talk or text while maneuvering a vehicle impairs driving performance. However, few published studies directly compare the distracting effects of texting using a hands-free (i.e., speech-based interface) versus handheld cell phone, which is an important issue for legislation, automotive interface design and driving safety training. This study compared the effect of speech-based versus handheld texting on simulated driving performance by asking participants to perform a car following task while controlling the duration of a secondary texting task. Results showed that both speech-based and handheld texting impaired driving performance relative to the drive-only condition by causing more variation in speed and lane position. Handheld texting also increased the brake response time and increased variation in headway distance. Texting using a speech-based cell phone was less detrimental to driving performance than handheld texting. Nevertheless, the speech-based texting task still significantly impaired driving compared to the drive-only condition. These results suggest that speech-based interaction disrupts driving, but reduces the levels of performance interference compared to handheld devices. In addition, the difference in the distraction effect caused by speech-based and handheld texting is not simply due to the difference in task duration.
Exploring user expectations for context and road video sharing while calling and driving (pp. 132-139)
  Bastian Pfleging; Stefan Schneegass; Albrecht Schmidt
Calling while driving a car has become very common since the rise of mobile phones. Drivers use their phone despite the fact that calling in the car is potentially distracting and dangerous. Prohibiting communication while driving is not a good idea as there are also positive effects of calling (e.g., ability to notify about a delay, staying awake, preventing fatigue, guidance at foreign places).
   In contrast to passengers in the car, remote phone callers do not know any context details about the driver besides transmitted background noise. Using driving-related context information and live images makes it possible to create situation awareness for the caller outside of the car and to share a passenger-like view of car, road, and traffic conditions. In this paper, we explore drivers' and callers' expectations and reservations towards context and video sharing before and during phone calls. First, we explored which data can be shared between callers and drivers. Based on a web survey conducted with 123 participants, we evaluated the callers' and drivers' attitudes towards sharing of such information. We then conducted separate interviews with various drivers to get deeper insights into their attitudes towards sharing context information while driving and their expectations towards systems that provide such features. We found that automatic context and video sharing is less preferred than situation-based sharing. If drivers like the idea of video sharing, they also assume that it would have a positive influence on driving.

Driver modelling

Automated driving aids: modeling, analysis, and interface design considerations (pp. 142-149)
  Michael Heymann; Asaf Degani
There have been rapid advances in control and automated driving aids in today's cars, with a concomitant rise in the breadth and complexity of driver interaction. Thus there is a need for a clear, consistent, logical, and holistic methodology for the design and analysis of the driver-vehicle interaction environment. Such a design method should also take into account expected future technological enhancements and advances. In this paper we present some emerging automated driving aids that are currently implemented in modern vehicles and those that are anticipated in the coming years. We focus on three automation features -- Adaptive Cruise Control (ACC), Lane Centering (LC), and Full Speed Range ACC (FSRA). The design methodology is based on formal modeling of the functionality and user interaction of these systems and analysis of their corresponding user interfaces. The approach demonstrated here is valid for a wide range of user interaction systems where operational complexity requires careful and verifiable design.
A data set of real world driving to assess driver workload (pp. 150-157)
  Stefan Schneegass; Bastian Pfleging; Nora Broy; Frederik Heinrich; Albrecht Schmidt
Driving a car is becoming increasingly complex. Many new features (e.g., for communication or entertainment) that can be used in addition to the primary task of driving a car increase the driver's workload. Assessing the driver's workload, however, is still a challenging task. A variety of means have been explored, but they focus on experimental conditions rather than on real-world scenarios (e.g., questionnaires). We focus on physiological data that may be assessed in a non-obtrusive way in the future and is therefore applicable in the real world.
   Hence, we conducted a real-world driving experiment with 10 participants, measuring a variety of physiological data and complementing it with a post-hoc video rating session. We use this data to analyze differences in workload across road types as well as at particularly important parts of the route, such as exits and on-ramps. Furthermore, we investigate the correlation between the objectively assessed and the subjectively measured data.
The effect of cognitive load on adaptation to differences in steering wheel force feedback level (pp. 158-164)
  Swethan Anand; Jacques Terken; Jeroen Hogema
In an earlier study it was found that drivers can adjust quickly to different force feedback levels on the steering wheel, even for such extreme levels as zero feedback. It was hypothesized that, due to the lack of cognitive load, participants could easily and quickly learn how to deal with extreme force feedback settings by investing more effort. The study presented in this paper tested this hypothesis by increasing cognitive load by means of an N-back secondary task added to the experimental conditions used in the earlier study. The secondary task was performed while driving a simulated vehicle with six different force feedback levels provided on the steering wheel. Driving performance was measured using standard metrics such as standard deviation of lateral position, standard deviation of steering wheel angle, steering wheel reversal rate and mean driving speed. It was found that the addition of the secondary task affected driving performance for the six different force feedback levels to an equal extent and did not differentially affect performance for the extreme levels of force feedback. Thus, the results do not provide support for the proposed hypothesis.

Methodology

The car data toolkit: smartphone supported automotive HCI research (pp. 168-175)
  David Wilfinger; Martin Murer; Axel Baumgartner; Christine Döttlinger; Alexander Meschtscherjakov; Manfred Tscheligi
Automobiles are environments rich in data with a high potential to be used as input for research and design. Various difficulties in accessing car data prevent HCI experts from making use of this data in novel interfaces and research projects. We present CarDaT (Car Data Toolkit), which uses Android smartphones to provide multidimensional sensor data in a minimally invasive way. CarDaT combines smartphone sensor data with data sources like OBD-II as well as other easily available remote data (e.g., weather). This data and the provided connectivity enable researchers to gather data on human behavior and designers to create novel context-aware interface solutions. Thus, CarDaT offers a low-cost, manufacturer-independent and scalable in-car agile prototyping and research environment. In this paper we describe how we used smartphones in CarDaT as tools for automotive research and design. We demonstrate the potential of CarDaT by describing three applications that we developed with the toolkit, namely rear-seat games, an experience sampling study, and an experiment using car data in a driver distraction study to inform design.
Measuring linguistically-induced cognitive load during driving using the ConTRe task (pp. 176-183)
  Vera Demberg; Asad Sayeed; Angela Mahr; Christian Müller
This paper shows that fine-grained linguistic complexity has measurable effects on cognitive load with consequences for the design of in-car spoken dialogue systems. We used synthesized German sentences with grammatical ambiguities to test the additional workload caused by human sentence processing during driving. For the driving task, we used the Continuous Tracking and Reaction (ConTRe) task, which we believe is suitable for the measurement of the fine-grained effects of linguistically-related workload phenomena in automotive environments, as it provides millisecond-level driving deviation measurements on a continuous course. We applied the task in an eye-tracking environment, using a pupillometric measure of cognitive workload called the Index of Cognitive Activity (ICA).
Standard definitions for driving measures and statistics: overview and status of recommended practice J2944 (pp. 184-191)
  Paul Green
This paper summarizes Society of Automotive Engineers (SAE) Recommended Practice J2944, Driving Performance Measures and Statistics (draft of February 12, 2013). This Practice was written because commonly used measures and statistics are (1) not named consistently (a lane departure is also called a lane exceedance, lane bust, line crossing, etc.), (2) rarely defined, and (3) when defined, not defined consistently. Is a lane departure when any tire (or just a front tire) touches the lane marking, covers the lane marking, or goes beyond the lane marking, or is it something else? Uncertainty about what was measured makes comparing driving studies extremely difficult. Therefore, the aim is to require those submitting driving-related papers to conferences (such as this one) and journals to cite this Practice and to indicate which version of each measure and statistic was used.
   The current draft contains definitions for more than 50 measures and statistics, along with definitions for supporting terms and measurement guidelines. Terms defined include longitudinal response time measures (e.g., until accelerator pedal release, until brake pedal contact, until brake jerk), longitudinal vehicle measures (e.g., time gap, CG distance headway, range, time to collision), lateral response measures and statistics (e.g., steering reaction time, number of steering reversals, steering entropy), and lateral control measures and statistics (e.g., roadway departure, lane departure, time to line crossing, number of lane changes). In addition to a definition, application guidance for each measure/statistic is provided, as well as key references in which the measure/statistic was defined or used, and if available, a distribution for each measure/statistic from naturalistic driving. For most measures and statistics, there are multiple definitions from among which a user can choose, each reflecting a different current practice.

Experience

Measurement of momentary user experience in an automotive context (pp. 194-201)
  Moritz Körber; Klaus Bengler
Increased competition between manufacturers has made it necessary for them to create products that provide great experiences in order to stand out from the rest of the market. For a positive product evaluation by the user, not only usability but also user experience plays an important role. Recent research revealed that the most common method of using post-test questionnaires to assess user experience as a remembered episode could miss information about the experience [19]. This study presents a method for measuring user experience through the momentary fulfillment of psychological needs ('momentary UX') in individual situations rather than over a whole interaction episode. 28 participants used a novel navigation device designed to help explore the environment during a leisure car ride as a team together with other passengers. A story approach was used to create a context for the experimental product interaction. The results show that the psychological needs of relatedness and stimulation addressed by the design could be measured specifically and momentarily for relevant events during the interaction. Further, a positive relationship to positive feelings during the interaction was found. Beyond that, a similar questionnaire for the whole episode is validated in this interaction, and a comparison between episodic and momentary UX is made. It is demonstrated that the method for measuring momentary UX is specific to needs and situations, and possible use cases are discussed.
Development of a questionnaire for identifying driver's personal values in driving (pp. 202-208)
  Qonita Shahab; Jacques Terken; Berry Eggen
The speed behavior of drivers is influenced by their personal driving values. It is assumed that these personal values may differ between drivers. In this paper, we describe the development of the Personal Driving Values (PDV) questionnaire. The questionnaire is to be used as a means of identifying personal values of drivers underlying their speed behavior. The development of the questionnaire items was inspired by other driving questionnaires, but the aim is to extract factors that represent the personally relevant values in driving. A questionnaire consisting of 49 items was distributed to 250 drivers. An exploratory factor analysis resulted in a final 25-item questionnaire addressing six different driving values: Sustainable Driving, Driving Fun, Driving Relaxed, Safe Driving, Driving Efficiency (Time) and Avoiding Fines.
Presenting system uncertainty in automotive UIs for supporting trust calibration in autonomous driving (pp. 210-217)
  Tove Helldin; Göran Falkman; Maria Riveiro; Staffan Davidsson
To investigate the impact of visualizing car uncertainty on drivers' trust during an automated driving scenario, a simulator study was conducted. A between-group design experiment with 59 Swedish drivers was carried out in which a continuous representation of the uncertainty of the car's ability to drive autonomously during snow conditions was displayed to one of the groups, whereas it was omitted for the control group. The results show that, on average, the group of drivers who were provided with the uncertainty representation took control of the car faster when needed, while they were, at the same time, the ones who spent more time looking at things other than the road ahead. Thus, drivers provided with the uncertainty information could, to a higher degree, perform tasks other than driving without compromising driving safety. The analysis of trust shows that the participants who were provided with the uncertainty information trusted the automated system less than those who did not receive such information, which indicates a more proper trust calibration than in the control group.

Posters

Computerized experience sampling in the car: issues and challenges (pp. 220-223)
  Alexander Meschtscherjakov; Sandra Trösterer; Christine Döttlinger; David Wilfinger; Manfred Tscheligi
User Experience (UX) studies in the automotive context are essential for good interaction design in the car. But there is a lack of appropriate methods to investigate the user's experience in-situ in a natural setting. In this paper we introduce computerized experience sampling in the car. We present an ESM tool based on the OBD-II interface of a car in combination with an Android smartphone and describe issues and challenges we experienced while applying our tool in an initial UX study.
Exploring head-up augmented reality interfaces for crash warning systems (pp. 224-227)
  Hyungil Kim; Xuefang Wu; Joseph L. Gabbard; Nicholas F. Polys
Crash warning systems are designed to help avoid vehicle accidents by notifying drivers of potential hazards. In typical crash warning systems, primary warning information is provided through visual, audible and/or haptic cues. In general, the use of crash warning systems results in safer driving. However, driver vehicle interfaces that employ visual warning elements, such as text messages appearing on the center console display and blindspot detection icons in side view mirrors, may take drivers' eyes off the road momentarily, and may lead to divided attention, distracted driving and increased crash risks. To address this, we propose an augmented reality (AR) head-up display interface for crash warning systems that displays visual cues on the drivers' view of the road, with the ultimate goal of increasing driver awareness and safety. In this paper, we describe a simulator-based comparative user study to begin understanding the effect of AR interface design features on driver performance, mental workload, and preferences. Our results support the hypothesis that head-up AR display interfaces for crash warning systems have potential safety benefits and a high likelihood of driver acceptance.
Using tap sequences to authenticate drivers (pp. 228-231)
  Andrew L. Kun; Travis Royer; Adam Leone
Most vehicles only require a key to authenticate the driver. However, with vehicles becoming portals to digital information, many drivers might find this authentication method inadequate. In this paper we explore using tap sequences on the back of the steering wheel to authenticate drivers. Our results indicate that drivers can learn to use an authentication system that uses such taps, and that the system could provide good protection from shoulder-surfing attacks.
Exploring comfortable and acceptable text sizes for in-vehicle displays (pp. 232-236)
  Derek Viita; Alexander Muir
Text size and intra-character density are important factors affecting usability and safety of in-vehicle digital displays. This research sought to determine the minimum text size that users find comfortable and acceptable for use in a car, and compare two languages utilizing character sets with very different intra-character densities. Self-reports of minimum comfortable English text sizes were found to be compatible with the minimum criteria outlined in previous research. Minimum comfortable traditional Chinese text sizes were found to be slightly larger. Implications for future research and driver distraction guidelines are discussed.
Towards augmented reality navigation using affordable technology (pp. 238-241)
  Oskar Palinko; Andrew L. Kun; Zachary Cook; Adam Downey; Aaron Lecomte; Meredith Swanson; Tina Tomaszewski
Augmented reality (AR) navigation systems are likely to improve the driving experience compared to today's personal navigation devices on the dashboard, as they don't require glances away from the road ahead. As technology is not yet capable of delivering an affordable and seamless HUD AR solution, we explore an inexpensive version of augmentation, which would have a similar benefit of reduced distraction. We propose using an LED (light emitting diode) matrix in the periphery of the driver's vision to indicate turns on the road. We find that such a system produces better results in visual attention, driving performance, and subjective measures compared to standard navigation devices.
Estimating cognitive load using pupil diameter during a spoken dialogue task (pp. 242-245)
  Peter A. Heeman; Tomer Meshorer; Andrew L. Kun; Oskar Palinko; Zeljko Medenica
We explore the feasibility of using pupil diameter to estimate how the cognitive load of the driver changes during a spoken dialogue task with a remote conversant. The conversants play a series of Taboo games, which do not follow a structured turn-taking or initiative protocol. We contrast the driver's pupil diameter when the remote conversant begins speaking with the diameter right before the driver responds. Although we find a significant difference in pupil diameter for the first pair in each game, subsequent pairs show little difference. We speculate that this is due to the less structured nature of the task, where there are no set time boundaries on when the conversants work on the task. This suggests that spoken dialogue systems for in-car use might better manage the driver's cognitive load by using a more structured interaction, such as system-initiative dialogues.
Unwinding after work: an in-car mood induction system for semi-autonomous driving (pp. 246-249)
  Zoë Terken; Roy Haex; Luuk Beursgens; Elvira Arslanova; Maria Vrachni; Jacques Terken; Dalila Szostak
We present a concept for an in-car system to support unwinding after work. It consists of a mood sensing steering wheel, an interactive in-car environment and a tangible input device. The in-car environment incorporates a basic state that uses color to relax or energize the driver, and an exploratory state that intends to immerse the user into a simulated environment. In the exploratory state, the user plays with a tangible input device allowing the simulated environment to appear. This environment includes images and sounds related to a certain theme. Our preliminary research findings reveal that users felt significantly calmer and marginally significantly better after interacting with the simulated environment. Results from the semi-structured interviews demonstrated that the majority of people appreciated the system and thought it might be effective to support unwinding. These outcomes demonstrate potential in the concept, but testing in a more realistic setting is necessary.
Mostly passive information delivery in a car (pp. 250-253)
  Tomáš Macek; Tereza Kašparová; Jan Kleindienst; Ladislav Kunc; Martin Labský; Jan Vystrcil
In this study we present and analyze a mostly passive infotainment approach to presenting information in a car. The passive style is similar to radio listening but content is generated on the fly and it is based on a mixture of personal information (calendar, emails) and public data (news, POI, jokes). The spoken part of the audio is machine synthesized. We explore two modes of operation. The first one is passive only. The second one is more interactive and speech commands are used to personalize the information mix and to request particular information items. Usability and distraction tests were conducted with both systems implemented using the Wizard of Oz technique. Both systems were assessed using multiple objective and subjective metrics and the results indicate that driver distraction was low for both systems. The users differed in the amount of interaction they preferred. Some users preferred more command-driven styles while others were happy with passive presentation. Most of the users were satisfied with the quality of synthesized speech and found it sufficient for the given purpose. In addition, feedback was collected from the subjects on what kind of information they liked listening to and how they would have preferred to ask for specific types of information.
Driver diaries: a multimodal mobility behaviour logging methodology (pp. 254-257)
  Martin Kracheel; Roderick McCall; Vincent Koenig; Thomas Engel
The Driver Diaries are a mobility behaviour logging methodology consisting of an online survey, a mobile application and focus group interviews. They are used to collect data about the mobility behaviour, routines and motivations of commuters in Luxembourg. The paper focuses on the design and development of the Driver Diaries and explores the use of the application both as a requirements-capture tool and as an integral element of an infotainment application that can change the routine driving behaviour of mobility participants in order to reduce traffic congestion.
Haptic in-seat feedback for lane departure warning (pp. 258-261)
  David E. Dass, Jr.; Alex Uyttendaele; Jacques Terken
A Lane Departure Warning (LDW) system for trucks was developed and evaluated in an iterative design process. As the auditory warning signals used by the majority of LDW systems are disliked by drivers, most effort was put into the design of haptic warning signals. The iterative design process resulted in two different haptic warning signals displayed through vibration motors in the seat: a "blinking" signal for Medium Criticality departures and a continuously vibrating signal for High Criticality departures. In addition, the iterative design process also resulted in small modifications to the auditory warning signals. The effectiveness and user acceptance of the haptic and auditory warning signals were evaluated in a driving simulator experiment with 20 participants and in a road test with a truck with 5 participants. It was found that the haptic warning signals were as effective as the auditory warning signals in dealing with lane departures, both in normal driving situations and in a driving + secondary task situation. In addition, the participants clearly preferred the haptic signals over the auditory warning signals.
Gameful design in the automotive domain: review, outlook and challenges (pp. 262-265)
  Stefan Diewald; Andreas Möller; Luis Roalter; Tobias Stockinger; Matthias Kranz
In this paper, we review the use of gameful design in the automotive domain. Outside of vehicles the automotive industry is mainly using gameful design for marketing and brand forming. For in-vehicle applications and for applications directly connected to real vehicles, the main usage scenarios of gameful design are navigation, eco-driving and driving safety. The objective of this review is to answer the following questions: (1) What elements of gameful design are currently used in the automotive industry? (2) What other automotive applications could be realized or enhanced by applying gameful design? (3) What are the challenges and limitations of gameful design in this domain especially for in-vehicle applications? The review concludes that the use of gameful design for in-vehicle applications seems to be promising. However, gamified applications related to the serious task of driving require thought-out rules and extensive testing in order to achieve the desired goal.
Assessing in-vehicle information systems application in the car: a versatile tool and unified testing platform (pp. 266-269)
  Nicolas Louveton; Roderick McCall; Tigran Avanesov; Vincent Koenig; Thomas Engel
In this paper we present the DriveLab IVIS testing platform, which allows the same experiments to be conducted under both simulator and real-car conditions. Other key aspects of DriveLab are that it is highly modular (allowing the exchange or integration of different components) and that it supports more than one driver. For example, we show that the same IVIS devices and scenario can be used with two different 3D engines. The paper provides a technical overview and a brief example of use.
Collision risk prediction and warning at road intersections using an object oriented Bayesian network (pp. 270-277)
  Galia Weidl; Gabi Breuel; Virat Singhal
This paper describes a novel approach to situation analysis at intersections using object-oriented Bayesian networks. The Bayesian network infers the collision probability for all vehicles approaching the intersection, while taking into account traffic rules, the digital street map, and the sensors' uncertainties. The environment perception is fused from communicated data, the vehicles' local perception, and self-localization. Thus, a cooperatively validated set of data is obtained to characterize all objects involved in a situation (resolving occlusions). The system is tested with data acquired by vehicles with heterogeneous equipment (with/without perception).
   In a first step, the probabilistic mapping of a vehicle onto a fixed set of traffic lanes and forward motion predictions are introduced. Second, criticality measures are evaluated for these motion predictions to infer the collision probability.
   In our test vehicle this probability is then used to warn the driver of a possible hazardous situation. It serves as a likelihood alarm parameter for deciding the intensity of HMI acoustic signals to direct the driver's attention. First results in various simulated and live real-time scenarios show that a collision can be predicted up to two seconds before a possible impact by applying the developed Bayesian network. The extension of this network to further situation features is the subject of ongoing research.
Sustainability, transport and design: reviewing the prospects for safely encouraging eco-driving (pp. 278-284)
  Rich C. McIlroy; Neville A. Stanton; Catherine Harvey; Duncan Robertson
Private vehicle use contributes a disproportionately large amount to the degradation of the environment we inhabit. Technological advancement is of course critical to the mitigation of climate change; however, it alone will not suffice: we must also see behavioural change. This paper argues for the application of Ergonomics to the design of private vehicles, particularly low-carbon vehicles (e.g. hybrid and electric), to encourage this behavioural change. A brief review of literature is offered concerning the effect of the design of a technological object on behaviour, the inter-related nature of goals and feedback in guiding performance, the effect of different driving styles on fuel economy, and the various challenges brought by hybrid and electric vehicles, including range anxiety, workload and distraction, complexity, and novelty. This is followed by a discussion on the potential applicability of a particular design framework, namely Ecological Interface Design, to the design of in-vehicle interfaces that encourage energy-conserving driving behaviours whilst minimising distraction and workload, thus ensuring safety.
Anticipatory driving competence: motivation, definition & modeling (pp. 286-291)
  Patrick Stahl; Birsen Donmez; Greg A. Jamieson
Anticipation of future events is recognized to be a significant element of driver competence. Certainly, guiding one's behavior through the anticipation of future traffic states provides potential gains in recognition and reaction times. However, the role of anticipation in driving and ways to support it have not been systematically studied. In this paper, we identify the characteristics of anticipatory driving and provide a working definition. In particular, we distinguish it from overall driving goals such as eco-driving or defensive driving, and instead present it as a high-level competence for efficient positioning of the vehicle that ultimately facilitates these goals. We also argue that anticipation occurs within the context of stereotypical scenarios and provide an initial taxonomy for the identification of such scenarios. We suggest the Decision Ladder as a useful way of modeling anticipatory driving and finally discuss a potential approach for the facilitation of anticipatory driving through skill- and rule-based behavior, which can allow for shortcuts on the Decision Ladder.
Graphic toolkit for adaptive layouts in in-vehicle user interfaces (pp. 292-298)
  Renate Häuslschmid; Klaus Bengler; Cristina Olaverri-Monreal
Currently, the processes used by many car manufacturers to adapt information from the head unit to the intended display platform are outdated and extremely cumbersome. The individual graphic elements are not designed for use in different contexts, and many steps must be done manually to achieve a proper in-vehicle information visualization. Additionally, the amount of data that must be maintained is extremely large, resulting in strong restrictions on the variability of appearance and displayed information. An additional challenge is that multiple car brands belong to the same main company, with each brand having a separate identity. Therefore, the graphical user interface (GUI) elements require resizing and recomposition to reflect the brands' heterogeneous characteristics. To overcome these drawbacks we present a software solution to create and edit flexible in-vehicle GUIs through reusable elements or widgets that adjust their size and composition to their environmental context in a dynamic and automatic manner. We have examined the quality of the tool through validation rules for each step and proposed calculation algorithms as a possible approach for a largely automated evaluation.
A left-turn driving aid using projected oncoming vehicle paths with augmented reality (pp. 300-307)
  Cuong Tran; Karlin Bark; Victor Ng-Thow-Hing
Making left turns across oncoming traffic without a protected left-turn signal is a significant safety concern at intersections. In a left turn situation, the driver typically does not have the right of way and must determine when to initiate the turn maneuver safely. It has been reported that a driver's inability to correctly judge the velocity and time gap of the oncoming vehicles is a major cause of left turn crashes. Although the position and velocity of surrounding vehicles are available using camera- and laser-based vehicle detection and tracking, methods for effectively communicating such information to help the driver have been relatively under-explored. In this paper, we describe a left turn aid that displays a 3-second projected path of the oncoming vehicle in the driver's environment with a 3D Head-Up Display (3D-HUD). Utilizing the abilities of our 3D-HUD to show the projected path in Augmented Reality (AR) could help increase driver intuition and alleviate visual distraction as compared to other possible non-AR solutions. Through an iterative process utilizing early user feedback, the design of the left turn aid was refined to interfere less with the driver's view and be more effective. A pilot study has been designed for a driving simulation environment and can be used to evaluate the potential of the proposed AR left turn aid in helping the driver be more cautious or efficient when turning left.