HCI Bibliography : Search Results
Database updated: 2016-05-10 Searches since 2006-12-01: 32,646,433
Hosted by ACM SIGCHI
The HCI Bibliography was moved to a new server 2015-05-12 and again 2016-01-05, substantially degrading the environment for making updates.
There are no plans to add to the database.
Please send questions or comments to director@hcibib.org.
Query: Steinicke_F* Results: 23 Sorted by: Date
[1] HoverSpace Interactive Tabletops / Lubos, Paul / Ariza, Oscar / Bruder, Gerd / Daiber, Florian / Steinicke, Frank / Krüger, Antonio Proceedings of IFIP INTERACT'15: Human-Computer Interaction, Part III 2015-09-14 v.3 p.259-277
Keywords: Hover space; Touch interaction; Stereoscopic displays; 3D interaction
Link to Digital Content at Springer
Summary: Recent developments in the area of stereoscopic displays and tracking technologies have paved the way to combine touch interaction on interactive surfaces with spatial interaction above the surface of a stereoscopic display. This holistic design space supports novel affordances and user experiences during touch interaction, but also induces challenges to the interaction design. In this paper we introduce the concept of hover interaction for such setups. To this end, we analyze the non-visual volume above a virtual object, which is perceived as the corresponding hover space for that object. The results show that the users' perceptions of hover spaces can be categorized into two groups: users assume that the shape of the hover space is extruded and scaled either towards their head or along the normal vector of the interactive surface. We provide a corresponding model to determine the shapes of these hover spaces, and confirm the findings in a practical application. Finally, we discuss important implications for the development of future touch-sensitive interfaces.

[2] Threefolded motion perception during immersive walkthroughs Perception / Bruder, Gerd / Steinicke, Frank Proceedings of the 2014 ACM Symposium on Virtual Reality Software and Technology 2014-11-11 p.177-185
ACM Digital Library Link
Summary: Locomotion is one of the most fundamental processes in the real world, and its consideration in immersive virtual environments (IVEs) is of major importance for many application domains requiring immersive walkthroughs. From a simple physics perspective, such self-motion can be defined by the three components speed, distance, and time. Determining motions in the frame of reference of a human observer imposes a significant challenge to the perceptual processes in the human brain, and the resulting speed, distance, and time percepts are not always veridical. In previous work in the area of IVEs, these components were evaluated in separate experiments, i.e., using largely different hardware, software and protocols.
    In this paper we analyze the perception of the three components of locomotion during immersive walkthroughs using the same setup and similar protocols. We conducted experiments in an Oculus Rift head-mounted display (HMD) environment which showed that subjects largely underestimated virtual distances, slightly underestimated virtual speed, and slightly overestimated elapsed time.

[3] A self-experimentation report about long-term use of fully-immersive technology Seeing, walking and being in spatial VEs / Steinicke, Frank / Bruder, Gerd Proceedings of the 2014 ACM Symposium Spatial User Interaction 2014-10-04 p.66-69
ACM Digital Library Link
Summary: Virtual and digital worlds have become an essential part of our daily life, and many activities that we used to perform in the real world, such as communication, e-commerce, or games, have nowadays been transferred to the virtual world. This transition has been addressed many times by science fiction literature and cinematographic works, which often show dystopic visions in which humans live their lives in a virtual reality (VR)-based setup, while they are immersed into a virtual or remote location by means of avatars or surrogates. In order to gain a better understanding of how living in such a virtual environment (VE) would impact human beings, we conducted a self-experiment in which we exposed a single participant to an immersive VR setup for 24 hours (divided into repeated sessions of two hours of VR exposure followed by ten-minute breaks), which is to our knowledge the longest documented use of an immersive VE so far. We measured different metrics to analyze how human perception, behavior, cognition, and the motor system change over time in a fully isolated virtual world.

[4] Are 4 hands better than 2?: bimanual interaction for quadmanual user interfaces Spatial pointing and touching / Lubos, Paul / Bruder, Gerd / Steinicke, Frank Proceedings of the 2014 ACM Symposium Spatial User Interaction 2014-10-04 p.123-126
ACM Digital Library Link
Summary: The design of spatial user interaction for immersive virtual environments (IVEs) is an inherently difficult task. Missing haptic feedback and spatial misperception hinder efficient direct interaction with virtual objects. Moreover, interaction performance depends on a variety of ergonomic factors, such as the user's endurance, muscular strength, as well as fitness. However, the potential benefits of direct and natural interaction offered by IVEs encourage research to create more efficient interaction methods. We suggest a novel way of 3D interaction by utilizing the fact that for many tasks, bimanual interaction shows benefits over one-handed interaction in a confined interaction space. In this paper we push this idea even further and introduce quadmanual user interfaces (QUIs) with two additional, virtual hands. These magic hands allow the user to keep their arms in a comfortable position yet still interact with multiple virtual interaction spaces. To analyze our approach we conducted a performance experiment inspired by a Fitts' Law selection task, investigating the feasibility of our approach for natural interaction with 3D objects in virtual space.

[5] Safe-&-round: bringing redirected walking to small virtual reality laboratories Poster session / Lubos, Paul / Bruder, Gerd / Steinicke, Frank Proceedings of the 2014 ACM Symposium Spatial User Interaction 2014-10-04 p.154
ACM Digital Library Link
Summary: Walking is usually considered the most natural form of self-motion in a virtual environment (VE). However, the confined physical workspace of typical virtual reality (VR) labs often prevents natural exploration of larger VEs. Redirected walking has been introduced as a potential solution to this restriction, but corresponding techniques often induce enormous manipulations if the workspace is considerably small, and therefore lack natural experiences. In this poster we propose the Safe-&-Round user interface, which supports natural walking in a potentially infinite virtual scene while confined to a considerably restricted physical workspace. This virtual locomotion technique relies on a safety volume, displayed as a semi-transparent half-capsule, inside which the user can walk without manipulations caused by redirected walking.

[6] Interactive surfaces for interaction with stereoscopic 3D (ISIS3D): tutorial and workshop at ITS 2013 Workshops and tutorials / Daiber, Florian / De Araujo, Bruno Rodrigues / Steinicke, Frank / Stuerzlinger, Wolfgang Proceedings of the 2013 ACM International Conference on Interactive Tabletops and Surfaces 2013-10-06 p.483-486
ACM Digital Library Link
Summary: With the increasing distribution of multi-touch capable devices, multi-touch interaction becomes more and more ubiquitous. Multi-touch interaction offers new ways to deal with 3D data, allowing a high degree of freedom (DOF) without instrumenting the user. Due to the advances in 3D technologies, designing for 3D interaction is now more relevant than ever. With more powerful engines and high resolution screens, even mobile devices can run advanced 3D graphics; 3D UIs are emerging beyond the game industry; and recently, first prototypes as well as commercial systems bringing (auto-)stereoscopic display to touch-sensitive surfaces have been proposed. With the Tutorial and Workshop on "Interactive Surfaces for Interaction with Stereoscopic 3D (ISIS3D)" we aim to provide an interactive forum that focuses on the challenges that appear when the flat digital world of surface computing meets the curved, physical, 3D space we live in.

[7] To touch or not to touch?: comparing 2D touch and 3D mid-air interaction on stereoscopic tabletop surfaces Full papers / Bruder, Gerd / Steinicke, Frank / Stuerzlinger, Wolfgang Proceedings of the 2013 ACM Symposium Spatial User Interaction 2013-07-20 p.9-16
ACM Digital Library Link
Summary: Recent developments in touch and display technologies have laid the groundwork to combine touch-sensitive display systems with stereoscopic three-dimensional (3D) display. Although this combination provides a compelling user experience, interaction with objects stereoscopically displayed in front of the screen poses some fundamental challenges: Traditionally, touch-sensitive surfaces capture only direct contacts such that the user has to penetrate the visually perceived object to touch the 2D surface behind the object. Conversely, recent technologies support capturing finger positions in front of the display, enabling users to interact with intangible objects in mid-air 3D space. In this paper we perform a comparison between such 2D touch and 3D mid-air interactions in a Fitts' Law experiment for objects with varying stereoscopic parallax. The results show that the 2D touch technique is more efficient close to the screen, whereas for targets further away from the screen, 3D selection outperforms 2D touch. Based on the results, we present implications for the design and development of future touch-sensitive interfaces for stereoscopic displays.

[8] SmurVEbox: a smart multi-user real-time virtual environment for generating character animations Sharing live user experience: how new mixed reality technologies and networks support real-time interactions / Beimler, Rüdiger / Bruder, Gerd / Steinicke, Frank Proceedings of the 2013 Virtual Reality International Conference 2013-03-20 p.1
ACM Digital Library Link
Summary: Animating virtual characters is a complex task, which requires professional animators and performers, expensive motion capture systems, or considerable amounts of time to generate convincing results. In this paper we introduce the SmurVEbox, a cost-effective animation system that encompasses many important aspects of animating virtual characters by providing a novel shared user experience. SmurVEbox is a collaborative environment for generating character animations in real time, which has the potential to enhance the computer animation process. Our setup allows animators and performers to cooperate on the same virtual animation sequence in real time. Performers are able to communicate with the animator in the real space while simultaneously perceiving the effects of their actions on the virtual character in the virtual space. The animator can refine actions of a performer in real time so that both collaborate on the same animation of a virtual character. We describe the setup and present a simple application.

[9] Touching the Void Revisited: Analyses of Touch Behavior on and above Tabletop Surfaces Creating Effective 3D Displays / Bruder, Gerd / Steinicke, Frank / Stuerzlinger, Wolfgang Proceedings of IFIP INTERACT'13: Human-Computer Interaction-1 2013 v.1 p.278-296
Keywords: Touch-sensitive systems; stereoscopic displays; 3D interaction
Link to Digital Content at Springer
Summary: Recent developments in touch and display technologies made it possible to integrate touch-sensitive surfaces into stereoscopic three-dimensional (3D) displays. Although this combination provides a compelling user experience, interaction with stereoscopically displayed objects poses some fundamental challenges. If a user aims to select a 3D object, each eye sees a different perspective of the same scene. This results in two distinct projections on the display surface, which raises the question where users would touch in 3D or on the two-dimensional (2D) surface to indicate the selection. In this paper we analyze the relation between the 3D positions of stereoscopically displayed objects and the on- as well as off-surface touch areas. The results show that 2D touch interaction works better close to the screen but also that 3D interaction is more suitable beyond 10cm from the screen. Finally, we discuss implications for the development of future touch-sensitive interfaces with stereoscopic display.

[10] Blending Real and Virtual Worlds Using Self-reflection and Fiducials Demonstrations / Fischbach, Martin / Wiebusch, Dennis / Latoschik, Marc Erich / Bruder, Gerd / Steinicke, Frank Proceedings of the 2012 International Conference on Entertainment Computing 2012-09-26 p.465-468
Keywords: Mixed Reality; Self-Reflection; Fiducials; Fish Tank Virtual Reality; Interactive Virtual Art; Multi-Touch
Link to Digital Content at Springer
Summary: This paper presents an enhanced version of a portable out-of-the-box platform for semi-immersive interactive applications. The enhanced version combines stereoscopic visualization, marker-less user tracking, and multi-touch with self-reflection of users and tangible object interaction. A virtual fish tank simulation demonstrates how real and virtual worlds are seamlessly blended by providing a multi-modal interaction experience that utilizes a user-centric projection, body, and object tracking, as well as a consistent integration of physical and virtual properties like appearance and causality into a mixed real/virtual world.

[11] The 3rd dimension of CHI (3DCHI): touching and designing 3D user interfaces Workshop summaries / Steinicke, Frank / Benko, Hrvoje / Krüger, Antonio / Keefe, Daniel / de la Rivière, Jean-Baptiste / Anderson, Ken / Häkkilä, Jonna / Arhippainen, Leena / Pakanen, Minna Extended Abstracts of ACM CHI'12 Conference on Human Factors in Computing Systems 2012-05-05 v.2 p.2695-2698
ACM Digital Library Citation
Summary: In recent years 3D has gained an increasing amount of attention -- interactive visualization of 3D data has become increasingly important and widespread due to the requirements of several application areas, and the entertainment industry has brought the 3D experience within the reach of wide audiences through games, 3D movies and stereoscopic displays. However, current user interfaces (UIs) often lack adequate support for 3D interactions: 2D metaphors still dominate in GUI design, 2D desktop systems are often limited in cases where natural interaction with 3D content is required, and sophisticated 3D user interfaces consisting of stereoscopic projections and tracked input devices are rarely adopted by ordinary users. In the future, novel interaction design solutions are needed to better support natural interaction and utilize the special features of 3D technologies.
    In this workshop we address the research and industrial challenges involved in exploring the space where the flat digital world of surface computing meets the physical, spatially complex, 3D space in which we live. The workshop will provide a common forum for researchers to share their visions of the future and recent results in the area of improving 3D interaction and UI design.

[12] smARTbox: out-of-the-box technologies for interactive art and exhibition Interactive technologies dedicated to art creation / Fischbach, Martin / Latoschik, Marc E. / Bruder, Gerd / Steinicke, Frank Proceedings of the 2012 Virtual Reality International Conference 2012-03-28 p.19
ACM Digital Library Link
Summary: Recent developments in the fields of interactive display technologies provide new possibilities for engaging visitors in interactive three-dimensional virtual art exhibitions. Tracking and interaction technologies such as the Microsoft Kinect and emerging multi-touch interfaces enable inexpensive and low-maintenance interactive art setups while providing portable solutions for engaging presentations and exhibitions. In this paper we describe the smARTbox, which is a responsive touch-enabled stereoscopic out-of-the-box technology for interactive art setups. Based on the described technologies, we sketch an interactive semi-immersive virtual fish tank implementation that enables direct and indirect interaction with visitors.

[13] 2d touching of 3d stereoscopic objects 3D interaction / Valkov, Dimitar / Steinicke, Frank / Bruder, Gerd / Hinrichs, Klaus Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011-05-07 v.1 p.1353-1362
ACM Digital Library Link
Summary: Recent developments in the area of touch and display technologies have suggested combining multi-touch systems and stereoscopic visualization. Stereoscopic perception requires each eye to see a slightly different perspective of the same scene, which results in two distinct projections on the display. Thus, if the user wants to select a 3D stereoscopic object in such a setup, the question arises where she would touch the 2D surface to indicate the selection. A user may apply different strategies, for instance touching the midpoint between the two projections, or touching one of them.
    In this paper we analyze the relation between the 3D positions of stereoscopically rendered objects and the on-surface touch points, where users touch the surface. We performed an experiment in which we determined the positions of the users' touches for objects, which were displayed with positive, negative or zero parallaxes. We found that users tend to touch between the projections for the two eyes with an offset towards the projection for the dominant eye. Our results give implications for the development of future touch-enabled interfaces, which support 3D stereoscopic visualization.

[14] Touching the 3rd dimension (T3D) SIG / Steinicke, Frank / Benko, Hrvoje / Daiber, Florian / Keefe, Daniel / de la Rivière, Jean-Baptiste Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011-05-07 v.2 p.161-164
ACM Digital Library Link
Summary: In recent years interactive visualization of 3D data has become increasingly important and widespread due to the requirements of several application areas. However, current user interfaces often lack adequate support for 3D interactions: 2D desktop systems are often limited in cases where natural interaction with 3D content is required, and sophisticated 3D user interfaces consisting of stereoscopic projections and tracked input devices are rarely adopted by ordinary users. Touch interaction has received considerable attention for 2D interfaces, and more recently for 3D interfaces. Many touch devices now support multiple degrees of freedom input by capturing multiple 2D contact positions on the surface as well as varying levels of pressure and even depth. There is, therefore, great potential for multi-touch interfaces to provide the traditionally difficult to achieve combination of natural 3D interaction without any instrumentation. When combined with a stereoscopic display of 3D data as well as 3D depth cameras, we believe that multi-touch technology can form the basis for a next generation of intuitive and expressive 3D user interfaces. Several research groups have begun to explore the potential, limitations, and challenges of this and other 3D touch environments, and first commercial systems are already available. The goal of the SIG "Touching the 3rd Dimension (T3D)" is to address the research and industrial challenges involved in exploring the space where the flat digital world of surface computing meets the physical, spatially complex, 3D space in which we live. The meeting will provide a common forum to attract groups of conference attendees who share their visions of the future and recent results in the area of improving 3D interaction and visualization by taking advantage of the strengths of advanced multi-touch computing.

[15] Augmentation techniques for efficient exploration in head-mounted display environments Augmented reality / Bolte, Benjamin / Bruder, Gerd / Steinicke, Frank / Hinrichs, Klaus / Lappe, Markus Proceedings of the 2010 ACM Symposium on Virtual Reality Software and Technology 2010-11-22 p.11-18
ACM Digital Library Link
Summary: Physical characteristics and constraints of today's head-mounted displays (HMDs) often impair interaction in immersive virtual environments (VEs). For instance, due to the limited field of view (FOV) subtended by the display units in front of the user's eyes, more effort is required to explore a VE by head rotations than for exploration in the real world.
    In this paper we propose a combination of two augmentation techniques that have the potential to make exploration of VEs more efficient: (1) augmenting the geometric FOV (GFOV) used for rendering the VE, and (2) amplifying head rotations while the user changes her head orientation. In order to identify how much manipulation can be applied without users noticing, we conducted two psychophysical experiments in which we analyzed subjects' ability to discriminate between virtual and real head pitch and roll rotations while three different geometric FOVs were used. Our results show that the combination of both techniques has great potential to support efficient exploration of VEs. We found that virtual pitch and roll rotations can be amplified by 30% and 44% respectively, when the GFOV matches the subject's estimation of the most natural FOV. This leads to a possible reduction of the user's effort required to explore the VE using a combination of both techniques by approximately 25%.

[16] Judgment of natural perspective projections in head-mounted display environments Performance analysis / Steinicke, Frank / Bruder, Gerd / Hinrichs, Klaus / Kuhl, Scott / Lappe, Markus / Willemsen, Pete Proceedings of the 2009 ACM Symposium on Virtual Reality Software and Technology 2009-11-18 p.35-42
Keywords: field of view, head-mounted displays, virtual reality
ACM Digital Library Link
Summary: The display units integrated in today's head-mounted displays (HMDs) provide only a limited field of view (FOV) to the virtual world. In order to present an undistorted view of the virtual environment (VE), the perspective projection used to render the VE has to be adjusted to the limitations caused by the HMD characteristics. In particular, the geometric field of view (GFOV), which defines the virtual aperture angle used for rendering of the 3D scene, is set up according to the display's field of view. A discrepancy between these two fields of view distorts the geometry of the VE in a way that either minifies or magnifies the imagery displayed to the user. This distortion has the potential to negatively or positively affect a user's perception of the virtual space, sense of presence, and performance on visual search tasks.
    In this paper we analyze if a user is consciously aware of perspective distortions of the VE displayed in the HMD. We introduce a psychophysical calibration method to determine the HMD's actual field of view, which may vary from the nominal values specified by the manufacturer. Furthermore, we conducted an experiment to identify perspective projections for HMDs which subjects judge as natural -- even if these perspectives deviate from the perspectives that are inherently defined by the display's field of view. We found that subjects evaluate a field of view as natural when it is larger than the actual field of view of the HMD -- in some cases up to 50%.

[17] Bimanual Interaction with Interscopic Multi-Touch Surfaces Multimodal Interfaces 2 / Schöning, Johannes / Steinicke, Frank / Krüger, Antonio / Hinrichs, Klaus / Valkov, Dimitar Proceedings of IFIP INTERACT'09: Human-Computer Interaction 2009-08-24 v.2 p.40-53
Keywords: Multi-touch Interaction; Interscopic Interaction; 3D User Interfaces
Link to Digital Content at Springer
Summary: Multi-touch interaction has received considerable attention in the last few years, in particular for natural two-dimensional (2D) interaction. However, many application areas deal with three-dimensional (3D) data and therefore require intuitive 3D interaction techniques. Indeed, virtual reality (VR) systems provide sophisticated 3D user interfaces, but lack efficient 2D interaction, and are therefore rarely adopted by ordinary users or even by experts. Since multi-touch interfaces represent a good trade-off between intuitive, constrained interaction on a touch surface providing tangible feedback, and unrestricted natural interaction without any instrumentation, they have the potential to form the foundation of the next generation user interface for 2D as well as 3D interaction. In particular, stereoscopic display of 3D data provides an additional depth cue, but until now the challenges and limitations of multi-touch interaction in this context have not been considered. In this paper we present new multi-touch paradigms and interactions that combine both traditional 2D interaction and novel 3D interaction on a touch surface to form a new class of multi-touch systems, which we refer to as interscopic multi-touch surfaces (iMUTS). We discuss iMUTS-based user interfaces that support interaction with 2D content displayed in monoscopic mode and 3D content usually displayed stereoscopically. In order to underline the potential of the proposed iMUTS setup, we have developed and evaluated two example interaction metaphors for different domains. First, we present intuitive navigation techniques for virtual 3D city models, and then we describe a natural metaphor for deforming volumetric datasets in a medical context.

[18] Scene-Motion Thresholds Correlate with Angular Head Motions for Immersive Virtual Environments USER / Jerald, Jason / Steinicke, Frank / Whitton, Mary Proceedings of the 2009 International Conference on Advances in Computer-Human Interactions 2009-02-01 p.69-74
Keywords: scene motion, virtual environments
doi.ieeecomputersociety.org/10.1109/ACHI.2009.40
Summary: To better understand motion perception in immersive virtual environments, we conducted a user study to quantify perception of scene motion as subjects yawed their heads. We measured psychometric functions of scene-velocity thresholds for different head motions and then extracted 75% thresholds, creating scene-velocity thresholds as functions of three measures of head motion: 1) Angular Range, 2) Peak Angular Velocity, and 3) Peak Angular Acceleration. We also measured scene-velocity thresholds for four phases of head motion: 1) the Start of the head turn, 2) the Center of the head turn, 3) the End of the head turn, and 4) All of the head turn. Scene-velocity thresholds increased as head motion increased for all tested conditions.

[19] Analyses of human sensitivity to redirected walking Human-calibrated interaction / Steinicke, Frank / Bruder, Gerd / Jerald, Jason / Frenz, Harald / Lappe, Markus Proceedings of the 2008 ACM Symposium on Virtual Reality Software and Technology 2008-10-27 p.149-156
ACM Digital Library Link
Summary: Redirected walking allows users to walk through large-scale immersive virtual environments (IVEs) while physically remaining in a reasonably small workspace by intentionally injecting scene motion into the IVE. In a constant stimuli experiment with a two-alternative forced-choice task we have quantified how much humans can unknowingly be redirected on virtual paths which are different from the paths they actually walk. 18 subjects were tested in four different experiments: (E1a) discrimination between virtual and physical rotation, (E1b) discrimination between two successive rotations, (E2) discrimination between virtual and physical translation, and discrimination of walking direction (E3a) without and (E3b) with start-up. In experiment E1a subjects performed rotations to which different gains had been applied, and then had to choose whether or not the visually perceived rotation was greater than the physical rotation. In experiment E1b subjects discriminated between two successive rotations where different gains had been applied to the physical rotation. In experiment E2 subjects chose if they thought that the physical walk was longer than the visually perceived scaled travel distance. In experiment E3a subjects walked a straight path in the IVE which was physically bent to the left or to the right, and they estimated the direction of the curvature. In experiment E3a the gain was applied immediately, whereas the gain was applied after a start-up of two meters in experiment E3b. Our results show that users can be turned physically about 68% more or 10% less than the perceived virtual rotation, distances can be up- or down-scaled by 22%, and users can be redirected on a circular arc with a radius greater than 24 meters while they believe they are walking straight.

[20] Hybrid traveling in fully-immersive large-scale geographic environments Posters / Steinicke, Frank / Bruder, Gerd / Hinrichs, Klaus Proceedings of the 2007 ACM Symposium on Virtual Reality Software and Technology 2007-11-05 p.229-230
Keywords: hybrid traveling, navigation, virtual reality
ACM Digital Library Link
Summary: In this paper we present hybrid traveling concepts that enable users to navigate immersively through 3D geospatial environments displayed by arbitrary applications such as Google Earth or Microsoft Virtual Earth. We propose a framework which allows virtual reality (VR) based interaction devices and concepts to be integrated into such applications that do not support VR technologies natively.
    In our proposed setup the content displayed by a geospatial application is visualized stereoscopically on a head-mounted display (HMD) for immersive exploration. The user's body is tracked in order to support natural traveling through the VE via a walking metaphor. Since the VE usually exceeds the dimension of the area in which the user can be tracked, we propose different strategies to map the user's movement into the virtual world intuitively. Moreover, commonly available devices and interaction techniques are presented for both-handed interaction to enrich the navigation process.

[21] Towards Applicable 3D User Interfaces for Everyday Working Environments 3D Interaction and 3D Interfaces / Steinicke, Frank / Ropinski, Timo / Bruder, Gerd / Hinrichs, Klaus Proceedings of IFIP INTERACT'07: Human-Computer Interaction 2007-09-10 v.1 p.546-559
Keywords: HCI; autostereoscopic display environments; 3D user interfaces
Link to Digital Content at Springer
Summary: Desktop environments represent a powerful user interface and have been used as the de facto standard human-computer interaction paradigm for over 20 years. But the rising demand for 3D applications dealing with complex datasets exceeds the capabilities of traditional interaction devices and two-dimensional displays. Such applications need more immersive and intuitive interfaces. In order to be accepted by users, technology-driven solutions that require inconvenient instrumentation, e.g., stereo glasses or tracked gloves, should be avoided. Autostereoscopic display environments equipped with tracking systems enable humans to experience virtual 3D environments more naturally, for instance via gestures, without having to use annoying devices. However, currently these approaches are used only for specially designed or adapted applications. In this paper we introduce new 3D user interface concepts for such setups which require minimal instrumentation of the user and can be integrated easily in everyday working environments. We propose an interaction framework which supports simultaneous display of and simultaneous interaction with both monoscopic as well as stereoscopic contents. We identify the challenges for combined mouse-, keyboard- and gesture-based input paradigms in such an environment and introduce novel interaction strategies.

[22] 3D Modeling and Design Supported Via Interscopic Interaction Strategies Part VI: Advanced Design and Development Support / Steinicke, Frank / Ropinski, Timo / Bruder, Gerd / Hinrichs, Klaus HCI International 2007: 12th International Conference on Human-Computer Interaction, Part IV: HCI Applications and Services 2007-07-22 v.4 p.1160-1169
Keywords: HCI; autostereoscopic displays; 3D user interfaces; interscopic interaction techniques; 3D modeling and design
Link to Digital Content at Springer
Summary: 3D modeling applications are widely used in many application domains ranging from CAD to industrial or graphics design. Desktop environments have proven to be a powerful user interface for such tasks. However, the rising complexity of 3D datasets exceeds the possibilities provided by traditional devices or two-dimensional displays. Thus, more natural and intuitive interfaces are required. But in order to gain the users' acceptance, technology-driven solutions that require inconvenient instrumentation, e.g., stereo glasses or tracked gloves, should be avoided. Autostereoscopic display environments in combination with 3D desktop devices enable users to experience virtual environments more immersively without annoying devices. In this paper we introduce interaction strategies with special consideration of the requirements of 3D modelers. We propose an interscopic display environment with associated user interface strategies that allow displaying and interacting with both monoscopic content, e.g., 2D elements, and stereoscopic content, which is beneficial for the 3D environment that has to be manipulated. These concepts are discussed with special consideration of the requirements of 3D modelers and designers.

[23] Virtual Reflections and Virtual Shadows in Mixed Reality Environments Short Papers: 3D and Virtual Environments / Steinicke, F. / Hinrichs, K. / Ropinski, T. Proceedings of IFIP INTERACT'05: Human-Computer Interaction 2005-09-12 p.1018-1021
Link to Digital Content at SpringerLink
Summary: In this paper we propose the concepts of virtual reflections, lights and shadows to enhance immersion in mixed reality (MR) environments, which focus on merging the real and the virtual world seamlessly. To improve immersion, we augment the virtual objects with real world information regarding the virtual reality (VR) system environment, e.g., CAVE, workbench etc. Real-world objects such as input devices or light sources as well as the position and posture of the user are used to simulate global illumination phenomena, e.g., users can see their own reflections and shadows on virtual objects. Besides the concepts and the implementation of this approach, we describe the system setup and an example application for this kind of advanced MR system environment.