HCI Bibliography : Search Results
Database updated: 2016-05-10 Searches since 2006-12-01: 32,243,277
Hosted by ACM SIGCHI
The HCI Bibliography was moved to a new server 2015-05-12 and again 2016-01-05, substantially degrading the environment for making updates.
There are no plans to add to the database.
Please send questions or comments to director@hcibib.org.
Query: Schneegass_S* Results: 32 Sorted by: Date
Records: 1 to 25 of 32
SkullConduct: Biometric User Identification on Eyewear Computers Using Bone Conduction Through the Skull Authentication and Privacy / Schneegass, Stefan / Oualil, Youssef / Bulling, Andreas Proceedings of the ACM CHI'16 Conference on Human Factors in Computing Systems 2016-05-07 v.1 p.1379-1384
ACM Digital Library Link
Summary: Secure user identification is important for the increasing number of eyewear computers but limited input capabilities pose significant usability challenges for established knowledge-based schemes, such as passwords or PINs. We present SkullConduct, a biometric system that uses bone conduction of sound through the user's skull as well as a microphone readily integrated into many of these devices, such as Google Glass. At the core of SkullConduct is a method to analyze the characteristic frequency response created by the user's skull using a combination of Mel Frequency Cepstral Coefficient (MFCC) features as well as a computationally light-weight 1NN classifier. We report on a controlled experiment with 10 participants that shows that this frequency response is person-specific and stable -- even when taking off and putting on the device multiple times -- and thus serves as a robust biometric. We show that our method can identify users with 97.0% accuracy and authenticate them with an equal error rate of 6.9%, thereby bringing biometric user identification to eyewear computers equipped with bone conduction technology.
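The abstract above describes identification as 1-nearest-neighbour classification over MFCC features of the bone-conducted signal. The following is a minimal sketch of that classification step only, assuming feature extraction has already happened; the enrolled vectors, user names, and the Euclidean distance metric are illustrative assumptions, not the paper's actual data or implementation.

```python
import numpy as np

# Hypothetical enrollment templates: one averaged MFCC feature vector per user.
# (Illustrative values; the paper's feature dimensionality and data are not reproduced.)
enrolled = {
    "alice": np.array([12.1, -3.4, 0.8, 5.2]),
    "bob":   np.array([9.7, -1.1, 2.3, 4.0]),
}

def identify(sample, templates):
    """1NN classification: return the user whose template is closest to the sample."""
    return min(templates, key=lambda user: np.linalg.norm(sample - templates[user]))

# A probe recording's MFCC vector, closest to alice's template in this toy example.
probe = np.array([11.9, -3.0, 1.0, 5.0])
identified = identify(probe, enrolled)
```

Authentication (the 6.9% equal-error-rate result) would additionally threshold the nearest distance rather than always accepting the closest match.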

Investigating User Needs for Bio-sensing and Affective Wearables Late-Breaking Works: Designing Interactive Systems / Hassib, Mariam / Khamis, Mohamed / Schneegass, Stefan / Shirazi, Ali Sahami / Alt, Florian Extended Abstracts of the ACM CHI'16 Conference on Human Factors in Computing Systems 2016-05-07 v.2 p.1415-1422
ACM Digital Library Link
Summary: Bio-sensing wearables are currently advancing to provide users with a wealth of information about their physiological and affective states. However, relatively little is known about users' interest in acquiring, sharing, and receiving this information, and through which channels and modalities. To close this gap, we report on the results of an online survey (N=109) exploring principal aspects of the design space of wearables, such as data types, contexts, feedback modalities, and sharing behaviors. Results show that users are interested in obtaining physiological, emotional, and cognitive data through modalities beyond traditional touchscreen output. The valence of the information, whether positive or negative, affects sharing behavior.

What People Really Remember: Understanding Cognitive Effects When Interacting with Large Displays Session 5: Large Displays / Panhey, Philipp / Döring, Tanja / Schneegass, Stefan / Wenig, Dirk / Alt, Florian Proceedings of the 2015 ACM International Conference on Interactive Tabletops and Surfaces 2015-11-15 p.103-106
ACM Digital Library Link
Summary: This paper investigates how common interaction techniques for large displays impact recall in learning tasks. Our work is motivated by results of prior research in different areas that attribute a positive effect on cognition to interactivity. We present findings from a controlled lab experiment with 32 participants comparing mobile phone-based interaction, touch interaction, and full-body interaction to a non-interactive baseline. In contrast to prior findings, our results reveal that more movement can negatively influence recall. In particular, we show that designers face an inherent trade-off between designing engaging interaction through extensive movement and creating memorable content.

Self-Actuated Displays for Vertical Surfaces Visualization / Bader, Patrick / Schwind, Valentin / Pohl, Norman / Henze, Niels / Wolf, Katrin / Schneegass, Stefan / Schmidt, Albrecht Proceedings of IFIP INTERACT'15: Human-Computer Interaction, Part IV 2015-09-14 v.4 p.282-299
Keywords: Self-actuated; Display; Vertical surface; Mobile
Link to Digital Content at Springer
Summary: Most current devices are passive with regard to their location: they are either integrated into the environment or need to be carried when used in mobile scenarios. In this paper we present a novel type of self-actuated device that can be placed on vertical surfaces such as whiteboards or walls. This enables vertical tangible interaction as well as the device interacting with the user through self-actuated movements. We explore the application space for such devices by aggregating user-defined application ideas gathered in focus groups. Moreover, we implement and evaluate four interaction scenarios, discuss their usability, and identify promising future use cases and improvements.

Cruise Control for Pedestrians: Controlling Walking Direction using Electrical Muscle Stimulation Tactile Notifications for Phones & Wearables / Pfeiffer, Max / Dünte, Tim / Schneegass, Stefan / Alt, Florian / Rohs, Michael Proceedings of the ACM CHI'15 Conference on Human Factors in Computing Systems 2015-04-18 v.1 p.2505-2514
ACM Digital Library Link
Summary: Pedestrian navigation systems require users to perceive, interpret, and react to navigation information. This can tax cognition as navigation information competes with information from the real world. We propose actuated navigation, a new kind of pedestrian navigation in which the user does not need to attend to the navigation task at all. An actuation signal is directly sent to the human motor system to influence walking direction. To achieve this goal we stimulate the sartorius muscle using electrical muscle stimulation. The rotation occurs during the swing phase of the leg and can easily be counteracted. The user therefore stays in control. We discuss the properties of actuated navigation and present a lab study on identifying basic parameters of the technique as well as an outdoor study in a park. The results show that our approach changes a user's walking direction by about 16°/m on average and that the system can successfully steer users in a park with crowded areas, distractions, obstacles, and uneven ground.

Modeling Distant Pointing for Compensating Systematic Displacements Interaction Techniques for Tables & Walls / Mayer, Sven / Wolf, Katrin / Schneegass, Stefan / Henze, Niels Proceedings of the ACM CHI'15 Conference on Human Factors in Computing Systems 2015-04-18 v.1 p.4165-4168
ACM Digital Library Link
Summary: Distant pointing at objects and persons is a highly expressive gesture that is widely used in human communication. Pointing is also used to control a range of interactive systems. To determine where a user is pointing, different ray-casting methods have been proposed. In this paper we assess how accurately humans point over distance and how pointing accuracy can be improved. Participants pointed at projected targets from 2m and 3m while standing and sitting. Testing three common ray-casting methods, we found that even with the most accurate one the average error is 61.3cm. We found that all tested ray-casting methods are affected by systematic displacements. Therefore, we trained a polynomial to compensate for this displacement. We show that a user-, pose-, and distance-independent quartic polynomial can reduce the average error by 37.3%.
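The compensation idea above — fit a quartic polynomial to the systematic displacement and subtract the model's prediction from raw pointing estimates — can be sketched as follows. The error curve below is synthetic; the paper's measured displacements and fitted coefficients are not reproduced here.

```python
import numpy as np

# Made-up systematic pointing bias (cm) as a function of horizontal target offset.
target_x = np.linspace(-100.0, 100.0, 41)                          # target offset (cm)
systematic_error = 2e-6 * target_x**4 - 3e-3 * target_x**2 + 5.0   # synthetic bias (cm)

# Fit a degree-4 (quartic) polynomial to the observed displacement ...
coeffs = np.polyfit(target_x, systematic_error, deg=4)
predicted = np.polyval(coeffs, target_x)

# ... and subtract the predicted bias to compensate the raw pointing estimate.
compensated = systematic_error - predicted
```

Because the synthetic bias here is itself exactly quartic, the residual after compensation is essentially zero; on real, noisy pointing data the paper reports a 37.3% average-error reduction instead.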

TUIs in the Large: Using Paper Tangibles with Mobile Devices WIP Theme: Mobile Interactions / Wolf, Katrin / Schneegass, Stefan / Henze, Niels / Weber, Dominik / Schwind, Valentin / Knierim, Pascal / Mayer, Sven / Dingler, Tilman / Abdelrahman, Yomna / Kubitza, Thomas / Funk, Markus / Mebus, Anja / Schmidt, Albrecht Extended Abstracts of the ACM CHI'15 Conference on Human Factors in Computing Systems 2015-04-18 v.2 p.1579-1584
ACM Digital Library Link
Summary: Tangible user interfaces (TUIs) have been proposed for interacting with digital information through physical objects. However, despite being investigated for decades, TUIs still play a marginal role compared to other UI paradigms. This is at least partially because TUIs often involve complex hardware elements, which make prototyping and production in quantity difficult and expensive. In this paper we present our work towards paper TUIs (pTUIs) -- easy-to-make interactive TUIs built from laser-cut paper, brass fasteners, metal bands, mirror foils, and touch screen devices as a platform. Through three examples we highlight the flexibility of the approach. We rebuilt the seminal URP system to show that pTUIs can replicate existing TUIs in DIY manufacturing. We implemented a tangible Pong game controlled by paper rackets to show that pTUIs can be used in highly interactive systems. Finally, we manufactured an interactive Christmas card and mailed it to 300 recipients to show that pTUIs can serve as an apparatus for exploring how they are used outside the lab in real life.

Evaluating Stereoscopic 3D for Automotive User Interfaces in a Real-World Driving Study WIP Theme: Novel Interfaces and Interaction Techniques / Broy, Nora / Schneegass, Stefan / Guo, Mengbing / Alt, Florian / Schmidt, Albrecht Extended Abstracts of the ACM CHI'15 Conference on Human Factors in Computing Systems 2015-04-18 v.2 p.1717-1722
ACM Digital Library Link
Summary: This paper reports on the use of in-car 3D displays in a real-world driving scenario. Today, stereoscopic displays are becoming ubiquitous in many domains, such as mobile phones and TVs. Instead of using 3D for entertainment, we explore the 3D effect as a means to spatially structure user interface (UI) elements. To evaluate the potentials and drawbacks of in-car 3D displays, we mounted an autostereoscopic display as an instrument cluster in a vehicle and conducted a real-world driving study with 15 experts in automotive UI design. The results show that the 3D effect increases the perceived quality of the UI and enhances the presentation of spatial information (e.g., navigation cues) compared to 2D. However, the effect should be applied judiciously to avoid spatial clutter, which can increase the system's complexity.

Design and evaluation of a layered handheld 3d display with touch-sensitive front and back / Bader, Patrick / Schwind, Valentin / Henze, Niels / Schneegass, Stefan / Broy, Nora / Schmidt, Albrecht Proceedings of the 8th Nordic Conference on Human-Computer Interaction 2014-10-26 p.315-318
ACM Digital Library Link
Summary: Touch screens have become truly pervasive through the success of smartphones and tablet PCs. Several approaches to further improve the interaction with touch screens have been proposed. In this paper we combine and extend two of these trends. We present a mobile 3D screen that consists of a stack of displays and is touch sensitive on both display sides. This design makes the screen independent of the user's viewing angle. A touch-sensitive back enables back-of-device interaction, avoiding the fat-finger problem. Combining back-of-device interaction with a transparent display also avoids occlusion of the user's finger on the back by the device. Through a study we investigate how back and front touch improves interaction with 3D content and show how back-of-device interaction is improved when the user can actually see the finger on the back.

3D Displays in Cars: Exploring the User Performance for a Stereoscopic Instrument Cluster Podium Presentations: Investigating the impacts of novel user interfaces / Broy, Nora / Alt, Florian / Schneegass, Stefan / Pfleging, Bastian AutomotiveUI 2014: International Conference on Automotive User Interfaces and Interactive Vehicular Applications 2014-09-17 v.1 n.9 pages p.2
ACM Digital Library Link
Summary: In this paper, we investigate user performance for stereoscopic automotive user interfaces (UI). Our work is motivated by the fact that stereoscopic displays are about to find their way into cars. Such a safety-critical application area creates an inherent need to understand how the use of stereoscopic 3D visualizations impacts user performance. We conducted a comprehensive study with 56 participants to investigate the impact of a 3D instrument cluster (IC) on primary and secondary task performance. We investigated different visualizations (2D and 3D) and complexities (low vs. high amount of details) of the IC as well as two 3D display technologies (shutter vs. autostereoscopy). As secondary tasks the participants judged spatial relations between UI elements (expected events) and reacted on pop-up instructions (unexpected events) in the IC. The results show that stereoscopy increases accuracy for expected events, decreases task completion times for unexpected tasks, and increases the attractiveness of the interface. Furthermore, we found a significant influence of the used technology, indicating that secondary task performance improves for shutter displays.

Experience Maps: Experience-Enhanced Routes for Car Navigation Work in Progress / Pfleging, Bastian / Meschtscherjakov, Alexander / Schneegass, Stefan / Tscheligi, Manfred AutomotiveUI 2014: International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Adjunct Proceedings 2014-09-17 v.2 n.6 pages p.41
ACM Digital Library Link
Summary: People spend a considerable amount of time per day driving a car. Navigation technology helps the driver find a location, see traffic details, or estimate the arrival time. Selecting a certain route influences the driving experience (i.e., the user experience while driving) through factors such as traffic density, landscape, or road type. However, current navigation systems mainly optimize routes for time, distance, or fuel efficiency -- neglecting important driving experience factors: the fastest route might still be packed with traffic, which stresses drivers or negatively influences their mood. In contrast, a slightly slower route could, for instance, offer a better driving experience with less traffic and scenic views. In this paper, we propose a concept that allows for experience-optimized routing to make driving more joyful and pleasurable. We also present the results of a web survey with 114 participants that we conducted to explore users' preferences and opinions regarding taking experience into account for route guidance.

Workshop on Smart Garments: Sensing, Actuation, Interaction, and Applications in Garments (WOSG) / Schneegass, Stefan / Van Laerhoven, Kristof / Cheng, Jingyuan / Amft, Oliver Adjunct Proceedings of the 2014 International Symposium on Wearable Computers 2014-09-13 v.2 p.225-229
ACM Digital Library Link
Summary: Over the last few years, various wearable electronic devices, technically similar to smartphones, have become available in the form factor of watches and glasses. However, including wearable sensing, actuation, and communication technologies directly into garments is still a great challenge. Clothes offer the chance to unobtrusively integrate new functionality. Nevertheless, it is essential to take into account that garments are fundamentally different from electronic devices: manufacturing processes for fabrics and clothes, the drivers of fashion, and user expectations with regard to comfort and durability are not comparable to those of classical electronic devices. In smart watches and glasses, applications resemble common smartphone functionality (e.g., picture taking, (instant) messaging, voice communication, presentation of reminders) with new input and output channels. In contrast, new possibilities for sensing, actuation, and interaction open up entirely new applications on garments. These new applications need to be identified and will in turn drive advances in smart garments. In this workshop, we focus on novel applications for garments. We discuss underlying abstraction layers that allow developers to create applications that are independent of a specific garment and can be used with different garments. Furthermore, we invite research contributions and position statements on sensing and actuation as the basic mechanisms for smart garments. Overall, the workshop aims at improving our understanding of the fundamental challenges when wearable computing moves beyond accessories into garments.

Towards a garment OS: supporting application development for smart garments Workshop on Smart Garments: Sensing, Actuation, Interaction, and Applications in Garments (WOSG) / Schneegass, Stefan / Birmili, Tobias / Hassib, Mariam / Henze, Niels Adjunct Proceedings of the 2014 International Symposium on Wearable Computers 2014-09-13 v.2 p.261-266
ACM Digital Library Link
Summary: Wearable devices and smart garments have emerged as a significant research domain over the last decades. Despite increasing commercial interest, however, smart garments are almost exclusively developed in academia, and the developed systems do not exceed a prototypical level. We argue that the main reason smart garments cannot be produced on a commercially relevant scale today is that each focuses on a specific use case. There is no tool support for application developers and no defined API within the software and hardware stack that allows developing useful smart garment applications. In this paper we present our work towards a Garment OS, a layered software stack that encapsulates different levels of abstraction. We highlight the design of the system, which is based on open web protocols. We present an evaluation with software engineers and derive directions for future work.

SmudgeSafe: geometric image transformations for smudge-resistant user authentication Security / Schneegass, Stefan / Steimle, Frank / Bulling, Andreas / Alt, Florian / Schmidt, Albrecht Proceedings of the 2014 International Joint Conference on Pervasive and Ubiquitous Computing 2014-09-13 v.1 p.775-786
ACM Digital Library Link
Summary: Touch-enabled user interfaces have become ubiquitous, such as on ATMs or portable devices. At the same time, authentication using touch input is problematic, since finger smudge traces may allow attackers to reconstruct passwords. We present SmudgeSafe, an authentication system that uses random geometric image transformations, such as translation, rotation, scaling, shearing, and flipping, to increase the security of cued-recall graphical passwords. We describe the design space of these transformations and report on two user studies: A lab-based security study involving 20 participants in attacking user-defined passwords, using high quality pictures of real smudge traces captured on a mobile phone display; and an in-the-field usability study with 374 participants who generated more than 130,000 logins on a mobile phone implementation of SmudgeSafe. Results show that SmudgeSafe significantly increases security compared to authentication schemes based on PINs and lock patterns, and exhibits very high learnability, efficiency, and memorability.
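The transformations the abstract names (translation, rotation, scaling, shearing, flipping) compose naturally into a single 2D affine matrix applied to the password image at each login, so old smudge traces no longer line up with the password points. A minimal sketch follows; the parameter ranges are illustrative assumptions, not the ranges the paper evaluated.

```python
import numpy as np

def random_transform(rng):
    """Compose a random 2D affine transformation into one 3x3 homogeneous matrix."""
    angle = rng.uniform(-np.pi / 8, np.pi / 8)   # rotation
    scale = rng.uniform(0.8, 1.2)                # uniform scaling
    shear = rng.uniform(-0.2, 0.2)               # horizontal shear
    flip = rng.choice([-1.0, 1.0])               # optional horizontal flip
    tx, ty = rng.uniform(-30.0, 30.0, size=2)    # translation (px)
    c, s = np.cos(angle), np.sin(angle)
    return np.array([
        [flip * scale * c, -scale * s + shear, tx],
        [scale * s,         scale * c,         ty],
        [0.0,               0.0,               1.0],
    ])

rng = np.random.default_rng(seed=1)
point = np.array([10.0, 20.0, 1.0])     # a password point in homogeneous image coords
moved = random_transform(rng) @ point   # where that point appears at the next login
```

Applying the same matrix to the whole image (e.g., via an image library's affine warp) keeps the cued-recall password usable while displacing the on-screen touch locations between logins.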

Midair Displays: Concept and First Experiences with Free-Floating Pervasive Displays Papers Session #2 / Schneegass, Stefan / Alt, Florian / Scheible, Jürgen / Schmidt, Albrecht Proceedings of the 2014 ACM International Symposium on Pervasive Displays 2014-06-03 p.27-31
ACM Digital Library Link
Summary: Due to advances in technology, displays could replace literally any surface in the future, including walls, windows, and ceilings. At the same time, midair remains a relatively unexplored domain for the use of displays as of today, particularly in public space. Nevertheless, we see large potential in the ability to make displays appear at any possible point in space, both indoors and outdoors. Such displays, which we call midair displays, could control large crowds in emergency situations, be used during sports for navigation and feedback on performance, or serve as group displays. We see midair displays as a complementary technology to wearable displays. In contrast to statically deployed displays, they allow information to be brought to the user anytime and anywhere. We explore the concept of midair displays and show that with current technology, e.g., copter drones, such displays can easily be built. A study on the readability of such displays showcases the potential and feasibility of the concept and provides early insights.

SenScreen: A Toolkit for Supporting Sensor-enabled Multi-Display Networks Papers Session #4 / Schneegass, Stefan / Alt, Florian Proceedings of the 2014 ACM International Symposium on Pervasive Displays 2014-06-03 p.92-97
ACM Digital Library Link
Summary: Over the past years, a number of sensors have emerged that enable gesture-based interaction with public display applications, including Microsoft Kinect, Asus Xtion, and Leap Motion. In this way, interaction with displays can be made more attractive, particularly if deployed across multiple displays and hence involving many users. However, interactive applications are still scarce, which can be attributed to the fact that developers usually need to implement a low-level connection to the sensor. In this work, we tackle this issue by presenting a toolkit, called SenScreen, consisting of (a) easy-to-install adapters that handle the low-level connection to sensors and provide the data via (b) an API that allows developers to write their applications in JavaScript. We evaluate our approach by letting two groups of developers each create an interactive game using our toolkit. Observations, interviews, and questionnaires indicate that our toolkit simplifies the implementation of interactive applications and may hence serve as a first step towards a more widespread use of interactive public displays.

Let me catch this!: experiencing interactive 3D cinema through collecting content with a mobile phone Enabling interactive performances / Häkkilä, Jonna R. / Posti, Maaret / Schneegass, Stefan / Alt, Florian / Gultekin, Kunter / Schmidt, Albrecht Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems 2014-04-26 v.1 p.1011-1020
ACM Digital Library Link
Summary: The entertainment industry is going through a transformation, and technology development is changing how we can enjoy and interact with entertainment media content. In our work, we explore how to enable interaction with content in the context of 3D cinemas. This allows viewers to use their mobile phone to retrieve, for example, information on the artist of the soundtrack currently playing or a discount coupon for the watch the main actor is wearing. We are particularly interested in the user experience of the interactive 3D cinema concept and in how different interactive elements and interaction techniques are perceived. We report on the development of a prototype application utilizing smartphones and on an evaluation in a cinema context with 20 participants. Results emphasize that designing for interactive cinema should strive for holistic and positive user experiences. Interactive content should be tied to the actual video content, but integrated into contexts where it does not conflict with the immersive experience of the movie.

FrameBox and MirrorBox: tools and guidelines to support designers in prototyping interfaces for 3D displays 3D interaction: modeling and prototyping / Broy, Nora / Schneegass, Stefan / Alt, Florian / Schmidt, Albrecht Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems 2014-04-26 v.1 p.2037-2046
ACM Digital Library Link
Summary: In this paper, we identify design guidelines for stereoscopic 3D (S3D) user interfaces (UIs) and present the MirrorBox and the FrameBox, two UI prototyping tools for S3D displays. As auto-stereoscopy becomes available for the mass market, we believe the design of S3D UIs for devices such as mobile phones, public displays, or car dashboards will rapidly gain importance. A benefit of such UIs is that they can group and structure information in a way that makes it easily perceivable for the user. For example, important information can be shown in front of less important information. This paper identifies core requirements for designing S3D UIs and derives concrete guidelines. The requirements also serve as the basis for two depth layout tools we built with the aim of overcoming the limitations of traditional prototyping when sketching S3D UIs. We evaluated the tools with usability experts and compared them to traditional paper prototyping.

Exploiting thermal reflection for interactive systems Novel mobile displays and devices / Shirazi, Alireza Sahami / Abdelrahman, Yomna / Henze, Niels / Schneegass, Stefan / Khalilbeigi, Mohammadreza / Schmidt, Albrecht Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems 2014-04-26 v.1 p.3483-3492
ACM Digital Library Link
Summary: Thermal cameras have recently drawn the attention of HCI researchers as a new sensory system enabling novel interactive systems. They are robust to illumination changes and make it easy to separate human bodies from the image background. Far-infrared radiation, however, has another characteristic that distinguishes thermal cameras from their RGB or depth counterparts, namely thermal reflection. Common surfaces reflect thermal radiation differently than visible light and can be perfect thermal mirrors. In this paper, we show that through thermal reflection, thermal cameras can sense the space beyond their direct field of view: a thermal camera can sense areas beside and even behind its field of view. We investigate how thermal reflection can increase the interaction space of projected surfaces in camera-projection systems. Moreover, we discuss the reflection characteristics of common surfaces in our vicinity in both the visible and thermal radiation bands. Using a proof-of-concept prototype, we demonstrate the increased interaction space for a hand-held camera-projection system. Furthermore, we describe a number of promising application examples that can benefit from the thermal reflection characteristics of surfaces.

NatCut: an interactive tangible editor for physical object fabrication Works-in-progress / Schneegass, Stefan / Shirazi, Alireza Sahami / Döring, Tanja / Schmid, David / Schmidt, Albrecht Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems 2014-04-26 v.2 p.1441-1446
ACM Digital Library Link
Summary: While physical prototyping and personal fabrication are currently becoming increasingly popular, many of the tools used to design 3D objects are still complex and cumbersome to use. In this paper, we address this issue and present a novel tabletop-based tangible editor, called NatCut, that allows the quick and easy design of physical enclosures for interactive prototypes. To generate an enclosure with NatCut, the user first chooses a basic geometric shape for it on the tabletop surface. By simply placing electronic components on the displayed 2D layout of the enclosure, the respective cut-outs and holes are generated. Further, a number of user interactions on the tabletop screen are supported to modify, personalize, and enrich the casing. The resulting 2D layout contains all joints needed to assemble the parts after laser cutting. We discuss the results of a user study in which we tested the approach.

Exploring virtual depth for automotive instrument cluster concepts Works-in-progress / Broy, Nora / Zierer, Benedikt J. / Schneegass, Stefan / Alt, Florian Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems 2014-04-26 v.2 p.1783-1788
ACM Digital Library Link
Summary: This paper compares the user experience of three novel concept designs for 3D-based car dashboards. Our work is motivated by the fact that analogue dashboards are currently being replaced by their digital counterparts. At the same time, auto-stereoscopic displays enter the market, allowing the quality of novel dashboards to be increased, both with regard to the perceived quality and in supporting the driving task. Since no guidelines or principles exist for the design of digital 3D dashboards, we take an initial step in designing and evaluating such interfaces. In a study with 12 participants we were able to show that stereoscopic 3D increases the perceived quality of the display while motion parallax leads to a rather disturbing experience.

Midair displays: exploring the concept of free-floating public displays Works-in-progress / Schneegass, Stefan / Alt, Florian / Scheible, Jürgen / Schmidt, Albrecht / Su, Haifeng Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems 2014-04-26 v.2 p.2035-2040
ACM Digital Library Link
Summary: Due to advances in technology, displays could replace literally any surface in the future, including walls, windows, and ceilings. At the same time, midair remains a relatively unexplored domain for the use of displays as of today, particularly in public spaces. Nevertheless, we see large potential in the ability to make displays appear at any possible point in space, both indoors and outdoors. Such displays, which we call midair displays, could control large crowds in emergency situations, be used during sports for navigation and feedback on performance, or serve as group displays that enable information to be brought to the user anytime and anywhere. We explore the concept of midair displays and show that with current technology, for example copter drones, such displays can easily be built.

Let me grab this: a comparison of EMS and vibration for haptic feedback in free-hand interaction 1. Touch / Pfeiffer, Max / Schneegass, Stefan / Alt, Florian / Rohs, Michael Proceedings of the 2014 Augmented Human International Conference 2014-03-07 p.46
ACM Digital Library Link
Summary: Free-hand interaction with large displays is getting more common, for example in public settings and exertion games. Adding haptic feedback offers the potential for more realistic and immersive experiences. While vibrotactile feedback is well known, electrical muscle stimulation (EMS) has not yet been explored in free-hand interaction with large displays. EMS offers a wide range of different strengths and qualities of haptic feedback. In this paper we first systematically investigate the design space for haptic feedback. Second, we experimentally explore differences between strengths of EMS and vibrotactile feedback. Third, based on the results, we evaluate EMS and vibrotactile feedback with regard to different virtual objects (soft, hard) and interaction with different gestures (touch, grasp, punch) in front of a large display. The results provide a basis for the design of haptic feedback that is appropriate for the given type of interaction and the material.

Using eye-tracking to support interaction with layered 3D interfaces on stereoscopic displays Adaptive user interfaces / Alt, Florian / Schneegass, Stefan / Auda, Jonas / Rzayev, Rufat / Broy, Nora Proceedings of the 2014 International Conference on Intelligent User Interfaces 2014-02-24 v.1 p.267-272
ACM Digital Library Link
Summary: In this paper, we investigate the concept of gaze-based interaction with 3D user interfaces. We currently see stereo vision displays becoming ubiquitous, particularly as auto-stereoscopy enables the perception of 3D content without the use of glasses. As a result, application areas for 3D beyond entertainment in cinema or at home emerge, including work settings, mobile phones, public displays, and cars. At the same time, eye tracking is hitting the consumer market with low-cost devices. We envision eye trackers in the future to be integrated with consumer devices (laptops, mobile phones, displays), hence allowing the user's gaze to be analyzed and used as input for interactive applications. A particular challenge when applying this concept to 3D displays is that current eye trackers provide the gaze point in 2D only (x and y coordinates). In this paper, we compare the performance of two methods that use the eye's physiology for calculating the gaze point in 3D space, hence enabling gaze-based interaction with stereoscopic content. Furthermore, we provide a comparison of gaze interaction in 2D and 3D with regard to user experience and performance. Our results show that with current technology, eye tracking on stereoscopic displays is possible with similar performance as on standard 2D screens.

Exploring user expectations for context and road video sharing while calling and driving Texting and calling / Pfleging, Bastian / Schneegass, Stefan / Schmidt, Albrecht AutomotiveUI 2013: International Conference on Automotive User Interfaces and Interactive Vehicular Applications 2013-10-28 p.132-139
ACM Digital Library Link
Summary: Calling while driving a car has become very common since the rise of mobile phones. Drivers use their phone despite the fact that calling in the car is potentially distracting and dangerous. Prohibiting communication while driving is not a good idea as there are also positive effects of calling (e.g., ability to notify about a delay, staying awake, preventing fatigue, guidance at foreign places).
    In contrast to passengers in the car, remote phone callers do not know any context details about the driver besides transmitted background noise. Using driving-related context information and live images makes it possible to create situation awareness for the caller outside of the car and to share a passenger-like view of the car, road, and traffic conditions. In this paper, we explore drivers' and callers' expectations of and reservations about context and video sharing before and during phone calls. First, we explored which data can be shared between callers and drivers. Based on a web survey with 123 participants, we evaluate the callers' and drivers' attitudes towards sharing such information. We then conducted separate interviews with several drivers to gain deeper insights into their attitudes towards sharing context information while driving and their expectations of systems that provide such features. We found that automatic context and video sharing is less preferred than situation-based sharing. Drivers who like the idea of video sharing also assume that it would have a positive influence on driving.