
Proceedings of the 2014 ACM Symposium on Spatial User Interaction

Fullname: SUI'14: Proceedings of the 2nd ACM Symposium on Spatial User Interaction
Editors: Andy Wilson; Frank Steinicke; Evan Suma; Wolfgang Stuerzlinger
Location: Honolulu, Hawaii
Dates: 2014-Oct-04 to 2014-Oct-05
Publisher: ACM
Standard No: ISBN 978-1-4503-2820-3; ACM DL: Table of Contents; hcibib: SUI14
Papers: 44
Pages: 162
Links: Conference Website
  1. Keynote address
  2. Flat surfaces in 3D space
  3. Spatial gestures
  4. Seeing, walking and being in spatial VEs
  5. Hybrid interaction spaces
  6. Spatial pointing and touching
  7. Poster session

Keynote address

The coming age of computer graphics and the evolution of language BIBAKFull-Text 1
  Ken Perlin
Sometime in the coming years -- whether through ubiquitous projection, AR glasses, smart contact lenses, retinal implants or some technology as yet unknown -- we will live in an eccescopic world, where everything we see around us will be augmented by computer graphics, including our own appearance. In a sense, we are just now starting to enter the Age of Computer Graphics.
   As children are born into this brave new world, what will their experience be? Face to face communication, both in-person and over great distances, will become visually enhanced, and any tangible object can become an interface to digital information [1]. Hand gestures will be able to produce visual artifacts.
   After these things come to pass, how will future generations of children evolve natural language itself [2]? How might they think and speak differently about the world around them? What will life in such a world be like for those who are native born to it?
   We will present some possibilities, and some suggestions for empirical ways to explore those possibilities now -- without needing to wait for those smart contact lenses.
Keywords: Augmented reality; eccescopy; language; gesture

Flat surfaces in 3D space

Ethereal planes: a design framework for 2D information space in 3D mixed reality environments BIBAFull-Text 2-12
  Barrett Ens; Juan David Hincapié-Ramos; Pourang Irani
Information spaces are virtual workspaces that help us manage information by mapping it to the physical environment. This widely influential concept has been interpreted in a variety of forms, often in conjunction with mixed reality. We present Ethereal Planes, a design framework that ties together many existing variations of 2D information spaces. Ethereal Planes is aimed at assisting the design of user interfaces for next-generation technologies such as head-worn displays. From an extensive literature review, we encapsulated the common attributes of existing novel designs in seven design dimensions. Mapping the reviewed designs to the framework dimensions reveals a set of common usage patterns. We discuss how the Ethereal Planes framework can be methodically applied to help inspire new designs. We provide a concrete example of the framework's utility during the design of the Personal Cockpit, a window management system for head-worn displays.
Combining multi-touch input and device movement for 3D manipulations in mobile augmented reality environments BIBAFull-Text 13-16
  Asier Marzo; Benoît Bossavit; Martin Hachet
Nowadays, handheld devices are capable of displaying augmented environments in which virtual content overlaps reality. To interact with these environments, it is necessary to use a manipulation technique. The objective of a manipulation technique is to define how the input data modify the properties of the virtual objects. Current devices have multi-touch screens that can serve as input. Additionally, the position and rotation of the device can also be used as input, creating both an opportunity and a design challenge. In this paper we compare three manipulation techniques that employ, respectively, multi-touch, device movement, and a combination of both. A user evaluation on a docking task revealed that combining multi-touch and device movement yields the best task completion time and efficiency. Nevertheless, using only device movement and orientation is more intuitive and performs worse only for large rotations.
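
As an illustration of the kind of technique compared here, the sketch below shows one plausible way to fuse a 2D touch drag with the handheld device's rotation when transforming a virtual object; the function and parameter names are made up for illustration, and this is not the paper's implementation.

    import numpy as np

    def combined_manipulation(obj_pos, obj_rot, touch_delta_px, device_rot_delta,
                              px_to_world=0.001):
        # obj_pos: 3-vector (numpy array); obj_rot: 3x3 rotation matrix.
        # touch_delta_px: (dx, dy) finger drag in pixels since the last frame.
        # device_rot_delta: 3x3 incremental rotation of the handheld device.
        # Touch drag translates the object parallel to the screen plane.
        obj_pos = obj_pos + np.array([touch_delta_px[0], -touch_delta_px[1], 0.0]) * px_to_world
        # Physically rotating the device applies the same rotation to the object.
        obj_rot = device_rot_delta @ obj_rot
        return obj_pos, obj_rot
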
HOBS: head orientation-based selection in physical spaces BIBAFull-Text 17-25
  Ben Zhang; Yu-Hsiang Chen; Claire Tuna; Achal Dave; Yang Li; Edward Lee; Björn Hartmann
Emerging head-worn computing devices can enable interactions with smart objects in physical spaces. We present the iterative design and evaluation of HOBS -- a Head-Orientation Based Selection technique for interacting with these devices at a distance. We augment a commercial wearable device, Google Glass, with an infrared (IR) emitter to select targets equipped with IR receivers. Our first design shows that a naive IR implementation can outperform list selection, but has poor performance when refinement between multiple targets is needed. A second design uses IR intensity measurement at targets to improve refinement. To address the lack of natural mapping of on-screen target lists to spatial target location, our third design infers a spatial data structure of the targets enabling a natural head-motion based disambiguation. Finally, we demonstrate a universal remote control application using HOBS and report qualitative user impressions.
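
A minimal sketch of the intensity-based refinement idea, assuming each IR-equipped target reports a received signal strength; the head-motion disambiguation and the actual Glass/emitter protocol are beyond this illustration.

    def select_target(ir_readings):
        # ir_readings: dict mapping target id -> received IR intensity.
        lit = {tid: level for tid, level in ir_readings.items() if level > 0}
        if not lit:
            return None  # no receiver currently sees the head-worn emitter
        # Refinement: prefer the target receiving the strongest IR signal.
        return max(lit, key=lit.get)
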
AnnoScape: remote collaborative review using live video overlay in shared 3D virtual workspace BIBAFull-Text 26-29
  Austin Lee; Hiroshi Chigira; Sheng Kai Tang; Kojo Acquah; Hiroshi Ishii
We introduce AnnoScape, a remote collaboration system that allows users to overlay live video of the physical desktop image on a shared 3D virtual workspace to support individual and collaborative review of 2D and 3D content using hand gestures and real ink. The AnnoScape system enables distributed users to visually navigate the shared 3D virtual workspace, individually or jointly, by moving tangible handles; to snap into a shared viewpoint; and to generate a live video overlay of freehand annotations from the desktop surface onto the system's virtual viewports, which can be placed spatially in the 3D data space. Finally, we present results of our preliminary user study and discuss design issues and AnnoScape's potential to facilitate effective communication during remote 3D data reviews.

Spatial gestures

GestureAnalyzer: visual analytics for pattern analysis of mid-air hand gestures BIBAFull-Text 30-39
  Sujin Jang; Niklas Elmqvist; Karthik Ramani
Understanding the intent behind human gestures is a critical problem in the design of gestural interactions. A common method to observe and understand how users express gestures is to use elicitation studies. However, these studies require time-consuming analysis of user data to identify gesture patterns. Also, human analysis cannot describe gestures in as much detail as data-based representations of motion features can. In this paper, we present GestureAnalyzer, a system that supports exploratory analysis of gesture patterns by applying interactive clustering and visualization techniques to motion tracking data. GestureAnalyzer enables rapid categorization of similar gestures, and visual investigation of various geometric and kinematic properties of user gestures. We describe the system components, and then demonstrate its utility through a case study on mid-air hand gestures obtained from elicitation studies.
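
The clustering step could look roughly like the following sketch, which applies off-the-shelf hierarchical (agglomerative) clustering to per-gesture feature vectors; GestureAnalyzer's interactive pipeline is richer than this, and the feature choice here is only an assumption.

    from scipy.cluster.hierarchy import linkage, fcluster

    def cluster_gestures(feature_vectors, distance_threshold=2.0):
        # feature_vectors: (n_gestures, n_features) array, e.g. normalized
        # joint trajectories or aggregate kinematic descriptors per gesture.
        Z = linkage(feature_vectors, method="ward")  # agglomerative clustering
        # Cut the dendrogram at a distance threshold to obtain gesture groups.
        return fcluster(Z, t=distance_threshold, criterion="distance")
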
Exploring gestural interaction in smart spaces using head mounted devices with ego-centric sensing BIBAFull-Text 40-49
  Barry Kollee; Sven Kratz; Anthony Dunnigan
It is now possible to develop head-mounted devices (HMDs) that allow for ego-centric sensing of mid-air gestural input. Therefore, we explore the use of HMD-based gestural input techniques in smart space environments. We developed a usage scenario to evaluate HMD-based gestural interactions and conducted a user study to elicit qualitative feedback on several HMD-based gestural input techniques. Our results show that for the proposed scenario, mid-air hand gestures are preferred to head gestures for input and rated more favorably compared to non-gestural input techniques available on existing HMDs. Informed by these study results, we developed a prototype HMD system that supports gestural interactions as proposed in our scenario. We conducted a second user study to quantitatively evaluate our prototype, comparing several gestural and non-gestural input techniques. The results of this study show no clear advantage or disadvantage of gestural inputs vs. non-gestural input techniques on HMDs. We did find that voice control as the sole input modality performed worst of the input techniques we evaluated. Lastly, we present two further applications implemented with our system, demonstrating 3D scene viewing and ambient light control. We conclude by briefly discussing the implications of ego-centric vs. exo-centric tracking for interaction in smart spaces.
VideoHandles: replicating gestures to search through action-camera video BIBAFull-Text 50-53
  Jarrod Knibbe; Sue Ann Seah; Mike Fraser
We present VideoHandles, a novel interaction technique to support rapid review of wearable video camera data by re-performing gestures as a search query. The availability of wearable video capture devices has led to a significant increase in activity logging across a range of domains. However, searching through and reviewing footage for data curation can be a laborious and painstaking process. In this paper we showcase the use of gestures as search queries to support review and navigation of video data. By exploring example self-captured footage across a range of activities, we propose two video data navigation styles using gestures: prospective gesture tagging and retrospective gesture searching. We describe VideoHandles' interaction design, motivation and results of a pilot study.

Seeing, walking and being in spatial VEs

Fisheye vision: peripheral spatial compression for improved field of view in head mounted displays BIBAFull-Text 54-61
  Jason Orlosky; Qifan Wu; Kiyoshi Kiyokawa; Haruo Takemura; Christian Nitschke
A current problem with many video see-through displays is the lack of a wide field of view, which can make them dangerous to use in real world augmented reality applications since peripheral vision is severely limited. Existing wide field of view displays are often bulky, lack stereoscopy, or require complex setups. To solve this problem, we introduce a prototype that utilizes fisheye lenses to expand a user's peripheral vision inside a video see-through head mounted display. Our system provides an undistorted central field of view, so that natural stereoscopy and depth judgment can occur. The peripheral areas of the display show content through the curvature of each of two fisheye lenses using a modified compression algorithm so that objects outside of the inherent viewing angle of the display become visible. We first test an initial prototype with 180° field of view lenses, and then build an improved version with 238° lenses. We also describe solutions to several problems associated with aligning undistorted binocular vision and the compressed periphery, and finally compare our prototype to natural human vision in a series of visual acuity experiments. Results show that users can effectively see objects up to 180°, and that overall detection rate is 62.2% for the display versus 89.7% for the naked eye.
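
The peripheral compression can be pictured as a radial remapping of viewing angles: angles inside the undistorted central field pass through unchanged, and angles out to the lens limit are squeezed into the remaining display range. The sketch below uses illustrative numbers, not the paper's calibration.

    import numpy as np

    def compress_periphery(theta_deg, center_deg=30.0, lens_max_deg=119.0,
                           display_max_deg=50.0):
        # theta_deg: source angle off the optical axis, in degrees.
        theta = np.asarray(theta_deg, dtype=float)
        scale = (display_max_deg - center_deg) / (lens_max_deg - center_deg)
        return np.where(theta <= center_deg,
                        theta,                                        # undistorted centre
                        center_deg + (theta - center_deg) * scale)    # compressed periphery
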
Human sensitivity to dynamic translational gains in head-mounted displays BIBAFull-Text 62-65
  Ruimin Zhang; Bochao Li; Scott A. Kuhl
Translational gains in head-mounted display (HMD) systems allow a user to walk at one rate in the real world while seeing themselves move at a faster or slower rate. Although several studies have measured how large gains must be for people to recognize them, little is known about how quickly the gains can be changed without people noticing. We conducted an experiment where participants were asked to walk on a straight path while wearing an HMD while we dynamically increased or decreased their virtual world translation speed. Participants indicated if their speed increased or decreased during their walk. In general, we found that the starting gain affected the detection and that, in most cases, there was little difference between gradual and instantaneous gain changes. The results of this work can help inform redirected walking implementations and other HMD applications where translational gains are not constant.
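
For reference, applying a translational gain is straightforward: the user's real-world head displacement per frame is scaled before being added to the virtual viewpoint. A minimal sketch (not the authors' code):

    def apply_translation_gain(prev_real_pos, curr_real_pos, virtual_pos, gain):
        # gain > 1: virtual travel is faster than physical walking; gain < 1: slower.
        real_delta = [c - p for c, p in zip(curr_real_pos, prev_real_pos)]
        return [v + gain * d for v, d in zip(virtual_pos, real_delta)]
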
A self-experimentation report about long-term use of fully-immersive technology BIBAFull-Text 66-69
  Frank Steinicke; Gerd Bruder
Virtual and digital worlds have become an essential part of our daily life, and many activities that we used to perform in the real world, such as communication, e-commerce, or games, have nowadays been transferred to the virtual world. This transition has been addressed many times by science fiction literature and cinematographic works, which often show dystopic visions in which humans live their lives in a virtual reality (VR)-based setup, while they are immersed into a virtual or remote location by means of avatars or surrogates. In order to gain a better understanding of how living in such a virtual environment (VE) would impact human beings, we conducted a self-experiment in which we exposed a single participant to an immersive VR setup for 24 hours (divided into repeated sessions of two hours of VR exposure followed by ten-minute breaks), which is to our knowledge the longest documented use of an immersive VE so far. We measured different metrics to analyze how human perception, behavior, cognition, and the motor system change over time in a fully isolated virtual world.

Hybrid interaction spaces

Coordinated 3D interaction in tablet- and HMD-based hybrid virtual environments BIBAFull-Text 70-79
  Jia Wang; Robert Lindeman
Traditional 3D User Interfaces (3DUI) in immersive virtual reality can be inefficient in tasks that involve diversities in scale, perspective, reference frame, and dimension. This paper proposes a solution to this problem using a coordinated, tablet- and HMD-based, hybrid virtual environment system. Wearing a non-occlusive HMD, the user is able to view and interact with a tablet mounted on the non-dominant forearm, which provides a multi-touch interaction surface, as well as an exocentric God view of the virtual world. To reduce transition gaps across 3D interaction tasks and interfaces, four coordination mechanisms are proposed, two of which were implemented, and one was evaluated in a user study featuring complex level-editing tasks. Based on subjective ratings, task performance, interview feedback, and video analysis, we found that having multiple Interaction Contexts (ICs) with complementary benefits can lead to good performance and user experience, despite the complexity of learning and using the hybrid system. The results also suggest keeping 3DUI tasks synchronized across the ICs, as this can help users understand their relationships, smoothen within- and between-task IC transitions, and inspire more creative use of different interfaces.
Making VR work: building a real-world immersive modeling application in the virtual world BIBAFull-Text 80-89
  Mark Mine; Arun Yoganandan; Dane Coffey
Building a real-world immersive 3D modeling application is hard. In spite of the many supposed advantages of working in the virtual world, users quickly tire of waving their arms about and the resulting models remain simplistic at best. The dream of creation at the speed of thought has largely remained unfulfilled due to numerous factors such as the lack of suitable menu and system controls, inability to perform precise manipulations, lack of numeric input, challenges with ergonomics, and difficulties with maintaining user focus and preserving immersion. The focus of our research is on the building of virtual world applications that can go beyond the demo and can be used to do real-world work. The goal is to develop interaction techniques that support the richness and complexity required to build complex 3D models, yet minimize expenditure of user energy and maximize user comfort. We present an approach that combines the natural and intuitive power of VR interaction, the precision and control of 2D touch surfaces, and the richness of a commercial modeling package. We also discuss the benefits of collocating 2D touch with 3D bimanual spatial input, the challenges in designing a custom controller targeted at achieving the same, and the new avenues that this collocation creates.
T(ether): spatially-aware handhelds, gestures and proprioception for multi-user 3D modeling and animation BIBAFull-Text 90-93
  David Lakatos; Matthew Blackshaw; Alex Olwal; Zachary Barryte; Ken Perlin; Hiroshi Ishii
T(ether) is a spatially-aware display system for multi-user, collaborative manipulation and animation of virtual 3D objects. The handheld display acts as a window into virtual reality, providing users with a perspective view of 3D data. T(ether) tracks users' heads, hands, fingers and pinching, in addition to a handheld touch screen, to enable rich interaction with the virtual scene. We introduce gestural interaction techniques that exploit proprioception to adapt the UI based on the hand's position above, behind or on the surface of the display. These spatial interactions use a tangible frame of reference to help users manipulate and animate the model in addition to controlling environment properties. We report on initial user observations from an experiment for 3D modeling, which indicate T(ether)'s potential for embodied viewport control and 3D modeling interactions.
RUIS: a toolkit for developing virtual reality applications with spatial interaction BIBAFull-Text 94-103
  Tuukka M. Takala
We introduce the Reality-based User Interface System (RUIS), a virtual reality (VR) toolkit aimed at students and hobbyists, which we have used in an annually organized VR course for the past four years. The RUIS toolkit provides 3D user interface building blocks for creating immersive VR applications with spatial interaction and stereo 3D graphics, while supporting affordable VR peripherals like Kinect, PlayStation Move, Razer Hydra, and Oculus Rift. We describe a novel spatial interaction scheme that combines freeform, full-body interaction with traditional video game locomotion, which can be easily implemented with RUIS. We also discuss the specific challenges associated with developing VR applications, and how they relate to the design principles behind RUIS. Finally, we validate our toolkit by comparing development difficulties experienced by users of different software toolkits, and by presenting several VR applications created with RUIS, demonstrating a variety of spatial user interfaces that it can produce.

Spatial pointing and touching

Void shadows: multi-touch interaction with stereoscopic objects on the tabletop BIBAFull-Text 104-112
  Alexander Giesler; Dimitar Valkov; Klaus Hinrichs
In this paper we present the Void Shadows interaction -- a novel stereoscopic 3D interaction paradigm in which each virtual object casts a shadow on a touch-enabled display surface. The user can conveniently interact with such a shadow, and her actions are transferred to the associated object. Since all interactive tasks are carried out on the zero-parallax plane, there are no accommodation-convergence or related 2D/3D interaction problems, while the user is still able to "directly" manipulate objects at different 3D positions, without first having to position a cursor and select an object. In an initial user study we demonstrated the applicability of the metaphor for some common tasks, and found that, compared to in-air 3D interaction techniques, users performed up to 28% more precisely in about the same amount of time.
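
The core mapping can be sketched as projecting each object onto the zero-parallax tabletop plane and hit-testing touches against the resulting footprint; the shape of the actual shadow proxy in Void Shadows may differ from this simplified disc.

    def shadow_footprint(obj_center, obj_radius):
        # Orthographic projection of the object's bounding sphere onto the
        # tabletop plane z = 0 (the zero-parallax plane).
        x, y, _z = obj_center
        return (x, y), obj_radius  # shadow centre and radius on the table

    def touch_hits_shadow(touch_xy, footprint):
        (cx, cy), r = footprint
        dx, dy = touch_xy[0] - cx, touch_xy[1] - cy
        return dx * dx + dy * dy <= r * r  # touch falls inside the shadow
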
Object-based touch manipulation for remote guidance of physical tasks BIBAFull-Text 113-122
  Matt Adcock; Dulitha Ranatunga; Ross Smith; Bruce H. Thomas
This paper presents a spatial multi-touch system for the remote guidance of physical tasks that uses semantic information about the physical properties of the environment. It enables a remote expert to observe a video feed of the local worker's environment and directly specify object movements via a touch display. Visual feedback for the gestures is displayed directly in the local worker's physical environment with Spatial Augmented Reality and observed by the remote expert through the video feed. A virtual representation of the physical environment is captured with a Kinect, which facilitates the context-based interactions. We evaluate two methods of remote worker interaction, object-based and sketch-based, and also investigate the impact of two camera positions, top and side, on task performance. Our results indicate that translation and aggregate tasks could be performed more accurately via the object-based technique when the top-down camera feed was used, while with the side-on camera view, sketching was faster and rotations were more accurate. We also found that for object-based interactions the top view was better on all four of our measured criteria, while for sketching no significant difference was found between camera views.
Are 4 hands better than 2?: bimanual interaction for quadmanual user interfaces BIBAFull-Text 123-126
  Paul Lubos; Gerd Bruder; Frank Steinicke
The design of spatial user interaction for immersive virtual environments (IVEs) is an inherently difficult task. Missing haptic feedback and spatial misperception hinder efficient direct interaction with virtual objects. Moreover, interaction performance depends on a variety of ergonomic factors, such as the user's endurance, muscular strength, and fitness. However, the potential benefits of direct and natural interaction offered by IVEs encourage research to create more efficient interaction methods. We suggest a novel way of 3D interaction by utilizing the fact that for many tasks, bimanual interaction shows benefits over one-handed interaction in a confined interaction space. In this paper we push this idea even further and introduce quadmanual user interfaces (QUIs) with two additional, virtual hands. These magic hands allow the user to keep their arms in a comfortable position yet still interact with multiple virtual interaction spaces. To analyze our approach we conducted a performance experiment inspired by a Fitts' Law selection task, investigating the feasibility of our approach for natural interaction with 3D objects in virtual space.
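
One way to picture the extra virtual hands is as fixed offsets that relocate the tracked physical hand into whichever interaction space is currently active, so the arm can rest in a comfortable pose; the offsets and space names below are purely illustrative assumptions, not the authors' parameters.

    import numpy as np

    # Illustrative offsets (metres) placing each virtual hand in its own space.
    SPACE_OFFSETS = {
        "right_near": np.array([0.25, 0.0, -0.35]),
        "right_far":  np.array([0.25, 0.0, -0.70]),
        "left_near":  np.array([-0.25, 0.0, -0.35]),
        "left_far":   np.array([-0.25, 0.0, -0.70]),
    }

    def virtual_hand_position(tracked_hand_pos, active_space):
        # Shift the tracked hand into the currently active interaction space.
        return np.asarray(tracked_hand_pos) + SPACE_OFFSETS[active_space]
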
Visual aids in 3D point selection experiments BIBAFull-Text 127-136
  Robert J. Teather; Wolfgang Stuerzlinger
We present a study investigating the influence of visual aids on 3D point selection tasks. In a Fitts' law pointing experiment, we compared the effects of texturing, highlighting targets upon being touched, and the presence of support cylinders intended to eliminate floating targets. Results of the study indicate that texturing and support cylinders did not significantly influence performance. Enabling target highlighting increased movement speed, while decreasing error rate. Pointing throughput was unaffected by this speed-accuracy tradeoff. Highlighting also eliminated significant differences between selection coordinate depth deviation and the deviation in the two orthogonal axes.
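
For readers unfamiliar with the pointing measures used here, the Shannon formulation of Fitts' index of difficulty and the resulting throughput can be computed as below (a simplified sketch that ignores effective-width corrections).

    import math

    def index_of_difficulty(distance, width):
        # Shannon formulation, in bits.
        return math.log2(distance / width + 1.0)

    def throughput(distance, width, movement_time_s):
        # Bits per second for one distance/width condition.
        return index_of_difficulty(distance, width) / movement_time_s
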

Poster session

Getting yourself superimposed on a presentation screen BIBAFull-Text 138
  Kenji Funahashi; Yusuke Nakae
When attending a conference, some audience members lose attention while following points on the screen. Presenters usually use a pointer rod or a laser pointer, but these are neither convenient nor easily visible on a large screen, and a camera and an extra screen are needed to show gestures. In this paper we propose intuitive presentation-support software: the presenter is superimposed onto the screen and can draw there interactively. Presenter movement on the screen is realized by recognizing natural, small actions, so the presenter can move within a limited stage space. Presenters can point to any important area and draw supplementary items with their own hand through our software, and of course show gestures on a large screen. We expect that audiences will be better able to understand and stay focused.
Measurements of operating time in first and third person views using video see-through HMD BIBAFull-Text 139
  Takafumi Koike
We measured the operation times of two tasks using a video see-through head-mounted display (HMD) in first- and third-person views.
Re:form: rapid designing system based on fusion and illusion of digital/physical models BIBAFull-Text 140
  Keiko Yamamoto; Ichiroh Kanaya; Monica Bordegoni; Umberto Cugini
Our goal is to allow creators to focus on their creative activity, developing their ideas for physical products in an intuitive way. We propose a new CAD system that allows users to draw virtual lines on the surface of a physical object using see-through AR, and also lets them import 3D data and produce the corresponding real object through 3D printing.
Supporting remote guidance through 3D annotations BIBAFull-Text 141
  Philipp Tiefenbacher; Tobias Gehrlich; Gerhard Rigoll; Takashi Nagamatsu
Remote guidance enables untrained users to solve complex tasks with the help of experts. These tasks often include positioning physical objects into certain poses. The expert indicates the final pose to the user, so the quality of annotations strongly influences the success of the remote collaboration. This work compares two kinds of annotation methods (2D and 3D) in two scenarios of different complexity. A pilot study indicates that 3D annotations reduce the execution time of the user in the complex scenario.
Simulator for developing gaze sensitive environment using corneal reflection-based remote gaze tracker BIBAFull-Text 142
  Takashi Nagamatsu; Michiya Yamamoto; Gerhard Rigoll
We describe a simulator for developing a gaze sensitive environment using a corneal reflection-based remote gaze tracker. The simulator can arrange cameras and IR-LEDs in 3D to check the measuring range to suit the target volume prior to implementation. We applied it to a museum showcase and a car.
Emotional space: understanding affective spatial dimensions of constructed embodied shapes BIBAFull-Text 143
  Edward Melcer; Katherine Isbister
We build upon recent research designing a constructive, multi-touch emotional assessment tool and present preliminary qualitative results from a Wizard of Oz study simulating the tool with clay. Our results showed the importance of emotionally contextualized spatial orientations, manipulations, and interactions of real world objects in the constructive process, and led to the identification of two new affective dimensions for the tool.
Augmented reality paper clay making based on hand gesture recognition BIBAFull-Text 144
  Pei-Ying Chiang; Wei-Yu Li
We propose a gesture-based 3D modeling system, which allows the user to create and sculpt a 3D model with hand gestures. The goal of our system is to provide a more intuitive 3D user interface than traditional 2D ones such as a mouse or touch pad. Inspired by how people make paper clay, a series of hand gestures are designed for interacting with the 3D object, and their corresponding mesh processing functions are developed. Thus, the user can create a desired virtual 3D object just as in paper clay making.
Projection augmented physical visualizations BIBAFull-Text 145
  Simon Stusak; Markus Teufel
Physical visualizations are an emergent area of research and appear in increasingly diverse forms. While they provide an engaging way of data exploration, they are often limited by a fixed representation and lack interactivity. In this work we discuss our early approaches and experiences in combining physical visualizations with spatial augmented reality and present an initial prototype.
Using LEGO to model 3D tactile picture books by sighted children for blind children BIBAFull-Text 146
  Jeeeun Kim; Abigale Stangl; Tom Yeh
3D printing has shown great potential in creating tactile picture books for blind children to develop emergent literacy. Sighted children can be motivated to contribute to the modeling of more tactile picture books, but current 3D design tools are too difficult to use. Can sighted children model a tactile book with LEGO pieces instead? Can a LEGO model be converted to a digital model that can then be printed?
Evaluating a SLAM-based handheld augmented reality guidance system BIBAFull-Text 147
  Jarkko Polvi; Takafumi Taketomi; Goshiro Yamamoto; Mark Billinghurst; Christian Sandor; Hirokazu Kato
In this poster we present the design and evaluation of a Handheld Augmented Reality (HAR) prototype system for guidance.
Natural pointing posture in distal pointing tasks BIBAFull-Text 148
  Heejin Kim; Seungjae Oh; Sung H. Han; Min K. Chung
In this poster, we present an experiment to capture users' natural pointing posture in distal pointing tasks at large displays and to examine the effect of pointing posture on the performance of distal pointing tasks. There were two types of pointing posture: a stretched-arm posture (69% of the participants) and a bent-arm posture (31% of the participants). The posture types did not affect movement angle, but did affect angular error, task completion time, and mean angular velocity.
Real-time sign language recognition using RGBD stream: spatial-temporal feature exploration BIBAFull-Text 149
  Fuyang Huang; Zelong Sun; Qiang Xu; Felix Yim Binh Sze; Tang Wai Lan; Xiaogang Wang
We propose a novel spatial-temporal feature set for sign language recognition, wherein we construct explicit spatial and temporal features that capture both hand movement and hand shape. Experimental results show that the proposed solution outperforms an existing one in terms of accuracy.
Exploring tablet surrounding interaction spaces for medical imaging BIBAFull-Text 150
  Hanaë Rateau; Laurent Grisoni; Bruno Araujo
Medical imaging is essential to support most diagnoses. It often requires visualizing individual 2D slices from 3D volumetric datasets and switching between both representations. Combining an overview with a detailed view of the data [1] helps keep the user in context when looking in detail at a slice. Given both their mobility and their suitability for direct manipulation, tablets are attractive devices to ease imaging analysis tasks. They have been successfully combined with tabletops [3], allowing new ways to explore volumetric data. However, while touch allows for more direct manipulation, it suffers from the well-known fat-finger problem, which can interfere with the display, making it hard to follow subtle visual changes. To overcome this problem, we propose to explore the space around tablet devices. Such an approach has been used for displays [2] to separate several workspaces of the desktop. Here, we use this space to invoke commands that do not need to be performed on the tablet itself, thus maximizing the visualization space during manipulations.
Proposing a classification model for perceptual target selection on large displays BIBAFull-Text 151
  Seungjae Oh; Heejin Kim; Hyo-Jeong So
In this research, we propose a linear SVM classification model for perceptual distal target selection on large displays. The model is based on two simple features of users' finger movements that reflect users' visual perception of targets. The model achieves an accuracy of 92.78% in predicting the intended target at the end point.
Investigating inertial measurement units for spatial awareness in multi-surface environments BIBAFull-Text 152
  Alaa Azazi; Teddy Seyed; Frank Maurer
In this work, we present an initial user study that explores the use of a dedicated inertial measurement unit (IMU) to achieve spatial awareness in Multi-surface Environments (MSEs). Our initial results suggest that measurements provided by an IMU may not provide value over sensor fusion techniques for spatially aware MSEs, but warrant further exploration.
LeapLook: a free-hand gestural travel technique using the leap motion finger tracker BIBAFull-Text 153
  Robert Codd-Downey; Wolfgang Stuerzlinger
Contactless motion sensing devices enable a new form of input that does not encumber the user with wearable tracking equipment. We present a novel travel technique using the Leap Motion finger tracker which adopts a 2DOF steering metaphor used in traditional mouse and keyboard navigation in many 3D computer games.
Safe-&-round: bringing redirected walking to small virtual reality laboratories BIBAFull-Text 154
  Paul Lubos; Gerd Bruder; Frank Steinicke
Walking is usually considered the most natural form of self-motion in a virtual environment (VE). However, the confined physical workspace of typical virtual reality (VR) labs often prevents natural exploration of larger VEs. Redirected walking has been introduced as a potential solution to this restriction, but the corresponding techniques often induce enormous manipulations if the workspace is very small, and therefore fail to provide a natural experience. In this poster we propose the Safe-&-Round user interface, which supports natural walking in a potentially infinite virtual scene while confined to a considerably restricted physical workspace. This virtual locomotion technique relies on a safety volume, displayed as a semi-transparent half-capsule, inside which the user can walk without manipulations caused by redirected walking.
The significance of stereopsis and motion parallax in mobile head tracking environments BIBAFull-Text 155
  Paul Lubos; Dimitar Valkov
Despite 3D TVs and applications gaining popularity in recent years, 3D displays on mobile devices are rare. With low-cost head tracking solutions and first user interfaces available on smartphones, the question arises how effective the 3D impression through motion-parallax is and whether it is possible to achieve viable depth perception without binocular stereo cues. As motion parallax and stereopsis may be considered the most important depth cues, we developed an experiment comparing the user's depth perception utilizing head tracking with and without stereopsis.
Depth cues and mouse-based 3D target selection BIBAFull-Text 156
  Robert J. Teather; Wolfgang Stuerzlinger
We investigated mouse-based 3D selection using one-eyed cursors, evaluating stereo and head-tracking. Stereo cursors significantly reduced performance for targets at different depths, but the one-eyed cursor yielded some discomfort.
An in-depth look at the benefits of immersion cues on spatial 3D problem solving BIBAFull-Text 157
  Cassandra Hoef; Jasmine Davis; Orit Shaer; Erin T. Solovey
We present a user study that takes an in-depth look at the effect of immersion cues on 3D spatial problem solving by combining traditional performance and experience measures with brain data.
HoloLeap: towards efficient 3D object manipulation on light field displays BIBAFull-Text 158
  Vamsi Kiran Adhikarla; Pawel Wozniak; Robert Teather
We present HoloLeap, which uses a Leap Motion controller for 3D model manipulation on a light field display (LFD). Like autostereo displays, LFDs support glasses-free 3D viewing. Unlike autostereo displays, LFDs automatically accommodate multiple viewpoints without the need for additional tracking equipment. We describe a gesture-based object manipulation technique that enables manipulation of 3D objects with 7 DOF by leveraging natural and familiar gestures. We provide an overview of research questions aimed at optimizing gestural input on light field displays.
Real-time and robust grasping detection BIBAFull-Text 159
  Chih-Fan Chen; Ryan Spicer; Rhys Yahata; Mark Bolas; Evan Suma
Depth-based gesture cameras provide a promising and novel way to interface with computers. Nevertheless, this type of interaction remains challenging due to the complexity of finger interactions and large viewpoint variations. Existing middleware such as the Intel Perceptual Computing SDK (PCSDK) or SoftKinetic IISU can provide abundant hand tracking and gesture information. However, the data is too noisy (Fig. 1, left) for consistent and reliable use in our application. In this work, we present a filtering approach that combines several features from PCSDK to achieve more stable hand openness and supports grasping interactions in virtual environments. A support vector machine (SVM), a machine learning method, is used to achieve better accuracy in a single frame, and a Markov Random Field (MRF), a probabilistic model, is used to stabilize and smooth the sequential output. Our experimental results verify the effectiveness and the robustness of our method.
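
A rough sketch of the per-frame classification plus temporal smoothing pipeline described above; here a sliding-window majority vote stands in for the MRF smoothing, and the feature layout is an assumption rather than the paper's PCSDK feature set.

    from collections import deque
    from sklearn import svm

    class GraspDetector:
        def __init__(self, window=7):
            self.clf = svm.SVC(kernel="rbf")    # per-frame grasp / no-grasp classifier
            self.history = deque(maxlen=window)

        def train(self, hand_features, labels):
            # hand_features: (n_frames, n_features), e.g. openness and finger distances.
            self.clf.fit(hand_features, labels)

        def update(self, frame_features):
            raw = int(self.clf.predict([frame_features])[0])
            self.history.append(raw)
            # Smoothed decision: majority vote over the recent window
            # (the paper uses an MRF for this temporal smoothing step).
            return int(sum(self.history) > len(self.history) / 2)
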
A raycast approach to hybrid touch / motion capture virtual reality user experience BIBAFull-Text 160
  Ryan P. Spicer; Rhys Yahata; Evan Suma; Mark Bolas
We present a novel approach to integrating a touch screen device into the experience of a user wearing a Head Mounted Display (HMD) in an immersive virtual reality (VR) environment with tracked head and hands.
Augmenting views on large format displays with tablets BIBAFull-Text 161
  Phil Lindner; Adolfo Rodriguez; Thomas D. Uram; Michael E. Papka
Large format displays are commonplace for viewing large scientific datasets. These displays often find their way into collaborative spaces, allowing for multiple individuals to be collocated with the display, though multi-modal interaction with the displayed content remains a challenge. We have begun development of a tablet-based interaction mode for use with large format displays to augment these workspaces.
Hidden UI: projection-based augmented reality for map navigation on multi-touch tabletop BIBAFull-Text 162
  Seungjae Oh; Hee-seung Kwon; Hyo-Jeong So
We present the development of an interactive system integrating a multi-touch tabletop and projection-based Augmented Reality (AR). The integrated system supports the flexible presentation of multiple UI components, which is suitable for multi-touch tabletop environments displaying complex information at different layers.