| Frustum slicing | | BIBAK | Full-Text | 1-10 | |
| K. Bormann | |||
In this paper, visibility culling is integrated tightly with an octree data
structure. This is done by slicing the frustum so that the minimal distance
from the eye to objects in a given frustum slice is twice the minimal
eye-to-object distance of the previous slice. Given a fixed minimal detail
size, i.e. a minimal spatial angle below which objects are not rendered to
the screen, an object must then be twice as big to be rendered when going
from one slice to the next. This corresponds to traversing the octree one
level less deeply. The minimal detail size thus hugely cuts the number of
nodes that the rendering algorithm must visit, a fact that becomes even more
pronounced when one notes that small objects are far more prevalent than big
ones. By splitting the frustum into focal and peripheral frusta and,
consequently, splitting frustum slices into focal and peripheral ones, one
can further exploit detail elision by rendering objects far from the line of
sight only to some larger minimal detail size. Keywords: Octree; Outdoor type VEs; Real-time rendering | |||
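A minimal sketch of the slice-to-depth mapping the abstract describes, assuming slice boundaries double in distance from some near distance d0 and that each octree level halves the node size; the function names and parameters are ours, not the paper's:

```python
import math

def slice_index(eye_distance: float, d0: float) -> int:
    """Slice k covers eye distances [d0 * 2**k, d0 * 2**(k + 1))."""
    return max(0, int(math.floor(math.log2(eye_distance / d0))))

def max_traversal_depth(eye_distance: float, d0: float, max_depth: int) -> int:
    """Each successive slice doubles the minimal renderable object size,
    so the octree is traversed one level less deeply per slice."""
    return max(0, max_depth - slice_index(eye_distance, d0))
```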
| Two-handed assembly with immersive task planning in virtual reality | | BIBAK | Full-Text | 11-20 | |
| H. Sun; B. Hujun | |||
Assembly modelling is the process of capturing entity and activity
information related to assemblies and the assembly process. Currently, most CAD systems
have been developed to ease the design of individual components, but are
limited in their support for assembly designs and planning capability, which
are crucial for reducing the cost and processing time in complex design,
constraint analysis and assembly task planning. This paper presents a framework
of a two-handed virtual assembly (VA) planner for assembly tasks, which
coordinates two hands jointly for feature-based manipulation, assembly analysis
and constraint-based task planning. Feature-based manipulation highlights
important assembly features (e.g. dynamic reference frames, moving arrows,
mating features) to guide users through assembly in an efficient and fluid
manner. Users can freely navigate and move the mating pair along
the collision-free path. The free motion of two-handed input in assembly is
further restricted to the allowable motion guided by the constraints recognised
on-line. The allowable motion in assembly is planned through logic steps
derived from the analysis of the constraints and their translation as the
assembly progresses. No preprocessing or predefined assembly sequence is necessary since
the planning is produced in real-time upon the two-handed interactions. Mating
features and constraints in databases are automatically updated after each
assembly to simplify the planning process. The two-handed task planner has
been developed and tested on several assembly examples, including a drill
(12 parts) and a robot (17 parts). The system can be generally applied to the
interactive task planning of assembly-type applications. Keywords: Two-handed interface; User interaction; Virtual assembly; Virtual reality | |||
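One way to picture the on-line constraint recognition the abstract mentions is a tolerance test on mating features; this sketch (our illustration, not the authors' algorithm) recognises an axial constraint when two cylindrical feature axes are nearly parallel and nearly coincident:

```python
import numpy as np

ANGLE_TOL = np.cos(np.radians(5.0))  # assumed angular tolerance
DIST_TOL = 0.002                     # assumed distance tolerance (metres)

def axial_constraint_recognised(p1, d1, p2, d2) -> bool:
    """p1, p2: points on each axis; d1, d2: unit direction vectors."""
    if abs(np.dot(d1, d2)) < ANGLE_TOL:
        return False                 # axes not parallel enough
    # Perpendicular offset of p2 from the infinite line through p1 along d1.
    offset = (p2 - p1) - np.dot(p2 - p1, d1) * d1
    return bool(np.linalg.norm(offset) < DIST_TOL)
```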
| A VE framework to study visual perception and action | | BIBAK | Full-Text | 21-32 | |
| F. Panerai; M. Ehrette; P. Leboucher | |||
In the real world, vision operates in harmony with self-motion, affording the
observer an unambiguous perception of three-dimensional (3D) space. In
laboratory conditions, because of technical difficulties, researchers studying
3D perception have often preferred to use the substitute of a stationary
observer, somehow neglecting aspects of the action-perception cycle. Recent
results in visual psychophysics have proved that self-motion and visual
processes interact, leading the moving observer to interpret a 3D virtual scene
differently from a stationary observer. In this paper we describe a virtual
environment (VE) framework which presents very interesting characteristics for
designing experiments in visual perception during action. These
characteristics arise in a number of ways from the design of a unique motion
capture device: first, its accuracy and minimal latency in position
measurement; second, its ease of use and adaptability to different display
interfaces. Such a VE
framework enables the experimenter to recreate stimulation conditions
characterised by a degree of sensory coherence typical of the real world.
Moreover, because of its accuracy and flexibility, the same device can be used
as a measurement tool to perform elementary but essential calibration
procedures. The VE framework has been used to conduct two studies which compare
the perception of 3D variables of the environment in moving and in stationary
observers under monocular vision. The first study concerns the perception of
absolute distance, i.e. the distance separating an object and the observer. The
second study refers to the perception of the orientation of a surface both in
the absence and presence of conflicts between static and dynamic visual cues.
In both cases, the VE framework has enabled the design of optimal
experimental conditions, shedding light on the role of action in
3D visual perception. Keywords: Action; Motion capture; Virtual environments; 3D visual perception;
Visuo-motor coherence | |||
| WalkMap: Developing an augmented reality map application for wearable computers | | BIBAK | Full-Text | 33-44 | |
| J. Lehikoinen; R. Suomela | |||
| We have designed, implemented, and evaluated a map application for wearable
computer users. Our application, called WalkMap, is targeted at a walking user
in an urban environment, offering the user both navigational aids as well as
contextual information. WalkMap uses augmented reality techniques to display
a map of the surrounding area on the user's head-worn display. WalkMap was
constructed through software development, user interface design and
evaluation, and existing knowledge of how humans use maps and navigate. The
key design driver in our approach is intuitiveness of use. In this paper, we
present the design and implementation process of our application, considering
human-map interfaces, technical implementation, and human-computer interfaces.
We identify some of the key issues in these areas, and present the way they
have been solved. We also present some usability evaluation results. Keywords: Augmented reality; Context-awareness; Head-worn display; Human-computer
interaction; Human-map interface; Navigational map; Wearable computing | |||
| Context calibration | | BIBAK | Full-Text | 45-55 | |
| K. Bormann | |||
The basic starting point of this paper is that 'context' constitutes most of
the user interface in VR-related experiments, yet performance measures are
based on only a few 'active' tasks. Thus, in order to meaningfully compare
results obtained in vastly different experiments, one needs to somehow
'subtract' the contribution to observables that is due to the context. For
the case where one is investigating whether changes in one observable cause
changes in another, a method, context calibration, is proposed that does
just that. This method is expected to largely factor out the part of one's
results that is due to factors not explicitly considered when evaluating the
experiment, factors that the experimenter might not even suspect influence
the experiment. A procedure for
systematically investigating the theoretical assumptions underlying context
calibration is also discussed as is an initial experiment adhering to the
proposed methodology. Keywords: Context calibration; Experimental methodology; Performance-presence
relationship | |||
| A Virtual Reality Tool for Teleoperation Research | | BIBK | Full-Text | 57-62 | |
| N. Rodriguez; J.-P. Jessel; P. Torguet | |||
Keywords: Autonomous agents; Distributed environment; Robotics; Teleoperation; Test
bed; Virtual reality | |||
| Designing Virtual Environments to Support Cooperation in the Real World | | BIBK | Full-Text | 63-74 | |
| A. Crabtree; T. Rodden; J. Mariani | |||
Keywords: Cooperative work; Design; Ethnography; Evaluation; Material affordances;
Virtual environments | |||
| Simulating Self-Motion I: Cues for the Perception of Motion | | BIBK | Full-Text | 75-85 | |
| L. R. Harris; M. R. Jenkin; D. Zikovitz; F. Redlick; P. Jaekl | |||
Keywords: Proprioception; Self-motion; Visual and non-visual cues to motion | |||
| Simulating Self-Motion II: A Virtual Reality Tricycle | | BIBK | Full-Text | 86-95 | |
| R. S. Allison; L. R. Harris; A. R. Hogue; U. T. Jasiobedzka | |||
Keywords: Self-motion simulation; Visual and vestibular egomotion cues | |||
| Development of a Learning-Training Simulator with Virtual Functions for Lathe Operations | | BIBK | Full-Text | 96-104 | |
| Z. Li; H. Qiu; Y. Yue | |||
Keywords: Lathe; Machining Operations; Simulator; Skill Training; Virtual Reality | |||
| Introduction | | BIB | Full-Text | 105-106 | |
| D. A. Bowman; M. Billinghurst | |||
| Experiments with Face-To-Face Collaborative AR Interfaces | | BIBAK | Full-Text | 107-121 | |
| M. Billinghurst; H. Kato; K. Kiyokawa; D. Belcher; I. Poupyrev | |||
| We describe a design approach, Tangible Augmented Reality, for developing
face-to-face collaborative Augmented Reality (AR) interfaces. Tangible
Augmented Reality combines Augmented Reality techniques with Tangible User
Interface elements to create interfaces in which users can interact with
spatial data as easily as real objects. Tangible AR interfaces remove the
separation between the real and virtual worlds, and so enhance natural
face-to-face communication. We present several examples of Tangible AR
interfaces and results from a user study that compares communication in a
collaborative AR interface to more traditional approaches. We find that in a
collaborative AR interface people exhibit behaviours closer to those of
unmediated face-to-face collaboration than they do with a projection screen interface. Keywords: Augmented reality; Collaboration; Communication; Usability evaluation | |||
| Novel Uses of Pinch Gloves™ for Virtual Environment Interaction Techniques | | BIBAK | Full-Text | 122-129 | |
| D. A. Bowman; C. A. Wingrave; J. M. Campbell; V. Q. Ly; C. J. Rhoton | |||
| The usability of three-dimensional (3D) interaction techniques depends upon
both the interface software and the physical devices used. However, little
research has addressed the issue of mapping 3D input devices to interaction
techniques and applications. This is especially crucial in the field of Virtual
Environments (VEs), where there exists a wide range of potential 3D input
devices. In this paper, we discuss the use of Pinch Gloves™ -- gloves
that report contact between two or more fingers -- as input devices for VE
systems. We begin with an analysis of the advantages and disadvantages of the
gloves as a 3D input device. Next, we present a broad overview of three novel
interaction techniques we have developed using the gloves, including a menu
system, a text input technique, and a two-handed navigation technique. All
three of these techniques have been evaluated for both usability and task
performance. Finally, we speculate on further uses for the gloves. Keywords: 3D Input Devices; Interaction Techniques; Pinch Gloves; Usability
Evaluation; Virtual Environments | |||
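The menu and text-entry techniques rest on the gloves reporting which fingertips are touching; a dispatch table over contact sets is one plausible core, sketched below with invented names (not the authors' API):

```python
PINCH_ACTIONS = {
    frozenset({"L_thumb", "L_index"}): "menu_select",
    frozenset({"R_thumb", "R_index"}): "confirm",
    frozenset({"L_thumb", "L_index", "R_thumb", "R_index"}): "two_handed_navigate",
}

def dispatch(contacts: set) -> str | None:
    """Map the current set of touching fingertips to an action, if any."""
    return PINCH_ACTIONS.get(frozenset(contacts))
```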
| Handling of Virtual Contact in Immersive Virtual Environments: Beyond Visuals | | BIBAK | Full-Text | 130-139 | |
| R. W. Lindeman; J. N. Templeman; J. L. Sibert; J. R. Cutler | |||
| This paper addresses the issue of improving the perception of contact that
users make with purely virtual objects in virtual environments. Because these
objects have no physical component, the user's perceptual understanding of the
material properties of the object, and of the nature of the contact, is
hindered, often limited solely to visual feedback. Many techniques for
providing haptic feedback to compensate for the lack of touch in virtual
environments have been proposed. These systems have increased our understanding
of the nature of how humans perceive contact. However, providing effective,
general-purpose haptic feedback solutions has proven elusive. We propose a
more holistic approach, delivering feedback to several modalities in
concert. This paper describes a prototype system we have developed for
delivering vibrotactile feedback to the user. The system provides a low-cost,
distributed, portable solution for incorporating vibrotactile feedback into
various types of systems. We discuss different parameters that can be
manipulated to provide different sensations, propose ways in which this
feedback can be combined with feedback of other modalities to create a better
understanding of virtual contact, and describe possible applications. Keywords: Haptic; Multimodal; Vibrotactile | |||
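The "different parameters that can be manipulated" suggest a simple stimulus record; the fields below are generic vibrotactile parameters offered as an assumption, not the authors' protocol:

```python
from dataclasses import dataclass

@dataclass
class TactorPulse:
    tactor_id: int           # which vibrating element to drive
    frequency_hz: float      # vibration frequency
    amplitude: float         # drive amplitude, 0.0 to 1.0
    duration_ms: int         # how long the pulse lasts
    duty_cycle: float = 1.0  # on/off ratio: buzzing vs. pulsing sensations
```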
| An Investigation of Visual Cues used to Create and Support Frames of Reference and Visual Search Tasks in Desktop Virtual Environments | | BIBAK | Full-Text | 140-150 | |
| S. S. Morar; R. D. Macredie; T. Cribbin | |||
| Visual depth cues are combined to produce the essential depth and
dimensionality of Desktop Virtual Environments (DVEs). This study discusses
DVEs in terms of the visual depth cues that create and support the
perception of frames of reference and the accomplishment of visual search
tasks. This paper presents the results of an investigation into the effects
of experimental stimulus position and of the visual depth cues luminance,
texture, relative height and motion parallax on precise depth judgements
made within a DVE. Results indicate that stimulus position significantly
affects precise depth judgements, that texture is significantly effective
only under certain conditions, and that motion parallax, in line with
previous results, is inconclusive for determining depth judgement accuracy
in egocentrically viewed DVEs. Results also show that exocentric views, incorporating relative height
and motion parallax visual cues, are effective for precise depth judgements
made in DVEs. The results help us to understand how certain visual depth
cues support the perception of frames of reference and precise depth
judgements, suggesting that the visual depth cues employed to create frames
of reference in DVEs may influence how effectively precise depth judgements
are undertaken. Keywords: Depth Perception; Desktop Virtual Environments; Frames of Reference; Motion
Parallax; Visual Depth Cues; Visual Search Tasks | |||
| MagicMeeting: A Collaborative Tangible Augmented Reality System | | BIBAK | Full-Text | 151-166 | |
| H. T. Regenbrecht; M. Wagner; G. Baratoff | |||
| We describe an augmented reality (AR) system that allows multiple
participants to interact with 2D and 3D data using tangible user interfaces.
The system features face-to-face communication, collaborative viewing and
manipulation of 3D models, and seamless access to 2D desktop applications
within the shared 3D space. All virtual content, including 3D models and 2D
desktop windows, is attached to tracked physical objects in order to leverage
the efficiencies of natural two-handed manipulation. The presence of 2D desktop
space within 3D facilitates data exchange between the two realms, enables
control of 3D information by 2D applications, and generally increases
productivity by providing access to familiar tools. We present a general
concept for a collaborative tangible AR system, including a comprehensive set
of interaction techniques, a distributed hardware setup, and a component-based
software architecture that can be flexibly configured using XML. We show the
validity of our concept with an implementation of an application scenario from
the automotive industry. Keywords: Augmented reality; Collaboration; CSCW; Tangible user interfaces; 3D user
interfaces | |||
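The XML-configurable component architecture might look something like the following; the element names, attributes, and loader are invented for illustration only:

```python
import xml.etree.ElementTree as ET

CONFIG = """
<magicmeeting>
  <component name="tracker" type="MarkerTracker" camera="0"/>
  <component name="viewer" type="ModelViewer" model="cylinderhead.iv"/>
  <link from="tracker" to="viewer"/>
</magicmeeting>
"""

def load_components(xml_text: str) -> list:
    """Parse component declarations into plain attribute dictionaries."""
    root = ET.fromstring(xml_text)
    return [dict(el.attrib) for el in root.iter("component")]

print(load_components(CONFIG))
```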
| Glove Based User Interaction Techniques for Augmented Reality in an Outdoor Environment | | BIBAK | Full-Text | 167-180 | |
| B. H. Thomas; W. Piekarski | |||
| This paper presents a set of pinch glove-based user interface tools for an
outdoor wearable augmented reality computer system. The main form of user
interaction is the use of hand and head gestures. We have developed a set of
augmented reality information presentation techniques. To support direct
manipulation, the following three selection techniques have been implemented:
two-handed framing, line of sight and laser beam. A new glove-based text entry
mechanism has been developed to support symbolic manipulation. A scenario for a
military logistics task is described to illustrate the functionality of this
form of interaction. Keywords: Augmented reality; Glove based interaction; User interactions; Wearable
computers | |||
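Of the three selection techniques, "laser beam" selection is the most directly algorithmic; below is a hedged sketch using bounding-sphere ray picking, our simplification rather than the authors' implementation:

```python
import numpy as np

def pick(origin, direction, objects):
    """objects: iterable of (name, centre, radius) bounding spheres.
    Returns the name of the nearest sphere the ray hits, or None."""
    direction = direction / np.linalg.norm(direction)
    best = None
    for name, centre, radius in objects:
        oc = np.asarray(centre) - origin
        t = np.dot(oc, direction)          # closest approach along the ray
        if t < 0:
            continue                       # sphere is behind the hand
        if np.linalg.norm(oc - t * direction) <= radius:
            if best is None or t < best[0]:
                best = (t, name)
    return None if best is None else best[1]
```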
| MAS Dynamics on Stage | | BIBAK | Full-Text | 181-195 | |
| N. Baerten; P. J. Braspenning | |||
| We aim to explore a new kind of user interface solely dedicated to
presenting the global inner dynamics of Multi-Agent Systems (MASs) on a higher
level. Being complex systems, MASs are often only barely understood even by
their own designers. Could we enable a particular kind of agent to tell the designer's
story about the agent- and human-interactive behaviours taking place inside the
MAS in a meaningful way? Could a set of virtual actors turn this story into a
'dynamic user-experience'? After taking a closer look at how to gain insight
into complex systems and why to pick a drama-based approach, we present the
three pillars of perception, interpretation and presentation, on which we base
our visualisation efforts. We show how they relate to various research fields
such as agent technology, human computer interaction, psychology, computer
graphics, animation, drama, body language and cognitive ergonomics. Based upon
these insights we introduce a conceptual model for drama-based visualisation,
followed by a stepwise description of how this framework could be applied. The
article closes with some reflections and conclusions. Keywords: Conceptual Model; Drama; Intelligent Interfaces; Multi-Agent Systems;
Virtual Actors | |||
| Polyhedral Objects Metamorphosis Using Convex Decomposition and Morphology | | BIBAK | Full-Text | 196-204 | |
| Wen-Yu Liu; Hua Li; Fei Wang; Guang-Xi Zhu | |||
| A new technique is presented for computing continuous shape transformations
between polyhedral objects. The shape transformation is divided into
polyhedron metamorphosis and a bi-directional local rigid-body rotation
transformation. By decomposing the two objects into sets of individual convex
sub-objects respectively, and establishing the matching between two subsets,
the approach can solve the metamorphosis problem of two non-homotopic objects
(including concave objects and objects with holes). Compared with other
methods, this metamorphosis algorithm can be executed automatically for
arbitrary polyhedra without user interaction. The user can either accept the
automatic matching or interactively select pairs of corresponding convex
subsets to obtain special effects. Experiments show that this method can
generate natural, high-fidelity, eye-pleasing metamorphosis results with simple
computation. Keywords: Convex Decomposition; Generalised Morphing; Local Rotation; Metamorphosis | |||
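Once two convex sub-objects are matched, one standard way to interpolate them (offered as an illustration, not necessarily the paper's construction) is to blend support points: for each sampled direction the blend traces the Minkowski combination (1-t)A + tB, which is itself convex:

```python
import numpy as np
from scipy.spatial import ConvexHull

def support_point(vertices: np.ndarray, d: np.ndarray) -> np.ndarray:
    """Vertex of the convex body farthest in direction d."""
    return vertices[np.argmax(vertices @ d)]

def blend_convex(va, vb, t, directions):
    """Hull vertices of the intermediate convex shape at time t in [0, 1]."""
    pts = np.array([(1 - t) * support_point(va, d) + t * support_point(vb, d)
                    for d in directions])
    return pts[ConvexHull(pts).vertices]
```

A few hundred random unit directions are usually enough for a visually smooth intermediate hull.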
| Material-Discontinuity Preserving Progressive Mesh Using Vertex-Collapsing Simplification | | BIBAK | Full-Text | 205-216 | |
| Shu-Kai Yang; Jung-Hong Chuang | |||
Level Of Detail (LOD) modelling, or mesh reduction, has been found useful in
interactive walkthrough applications. Progressive meshing techniques based on
edge or triangle collapsing have been recognised as useful for continuous
LOD, progressive refinement, and progressive transmission. We present a
vertex-collapsing mesh reduction scheme that effectively takes shape and
feature preservation as well as material-discontinuity preservation into account,
and produces a progressive mesh which generally has more vertices collapsed
between adjacent levels of detail than methods based on edge-collapsing and
triangle collapsing. Keywords: Level Of Detail; Material-Discontinuity Preserving; Progressive Mesh;
Topology Simplification; Vertex Collapsing; Virtual Reality | |||
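A vertex-collapse step can be recorded so the mesh is refinable in reverse; the record and the conservative material-boundary guard below are our assumptions about such a scheme, not the authors' data structures:

```python
from dataclasses import dataclass, field

@dataclass
class CollapseRecord:
    removed_vertex: int                 # vertex collapsed away at this step
    target_vertex: int                  # neighbour it is merged into
    removed_faces: list = field(default_factory=list)

def can_collapse(v, target, faces_around, material_of_face) -> bool:
    """Allow the collapse only if every face around both vertices shares a
    single material, so no material discontinuity is destroyed."""
    mats = {material_of_face(f) for f in faces_around(v)}
    mats |= {material_of_face(f) for f in faces_around(target)}
    return len(mats) == 1
```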
| A Novel Seven Degree of Freedom Haptic Device for Engineering Design | | BIBAK | Full-Text | 217-228 | |
| S. Kim; J. J. Berkley; M. Sato | |||
In this paper, the authors demonstrate a new, intuitive
force-feedback device that is ideally suited to engineering design. Force
feedback for the device is tension-based and is characterised by 7 degrees of
freedom (3 DOF for translation, 3 DOF for rotation, and 1 DOF for grasp). The
SPIDAR-G (SPace Interface Device for Artificial Reality with Grip) allows users
to interact with virtual objects naturally by manipulating two hemispherical
grips located in the centre of a device frame. Force feedback is achieved by
controlling tension in cables that are connected between a grip and motors
located at the corners of the frame. Methodologies are discussed for
displaying force and calculating translation, orientation and grasp from the
lengths of the 8 connecting cables. The SPIDAR-G is characterised by smooth force
feedback, minimised inertia, no backlash, scalability and safety. Such features
are attributed to strategic cable arrangement and control that results in
stable haptic rendering. Experimental results validate the feasibility of the
proposed device and example applications are described. Keywords: CAD; Force Feedback; Haptics; Virtual Design | |||
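Recovering a grip pose from cable lengths is essentially trilateration; here is a minimal Gauss-Newton sketch for the position part, assuming known anchor positions and at least four cables per grip (our simplification, not the device's actual solver):

```python
import numpy as np

def grip_position(anchors: np.ndarray, lengths: np.ndarray,
                  iters: int = 20) -> np.ndarray:
    """anchors: (n, 3) cable attachment points; lengths: (n,) measured."""
    p = anchors.mean(axis=0)                 # start at the frame centre
    for _ in range(iters):
        diff = p - anchors                   # (n, 3)
        dist = np.linalg.norm(diff, axis=1)  # predicted cable lengths
        r = dist - lengths                   # residuals to drive to zero
        J = diff / dist[:, None]             # Jacobian of dist w.r.t. p
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        p -= step                            # Gauss-Newton update
    return p
```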
| Methods and Algorithms for Constraint-based Virtual Assembly | | BIBAK | Full-Text | 229-243 | |
| Y. Wang; U. Jayaram; S. Jayaram; S. Imtiyaz | |||
Constraint-based simulation is a fundamental concept used for assembly in a
virtual environment. The constraints (axial, planar, etc.) are extracted from
the assembly models in the CAD system and are simulated during the virtual
assembly operation to represent the corresponding real-world operations. In this paper, we
present the analysis of 'combinations' and 'order of application' of axial and
planar constraints used in assembly. Methods and algorithms for checking and
applying the constraints in the assembly operation are provided. An
object-oriented model for managing these constraints in the assembly operation
is discussed. Keywords: Virtual Assembly; Assembly Constraints; Constrained Motion Simulation | |||
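Once an axial constraint is applied, the part's motion reduces to translation along and rotation about the mating axis; projecting attempted motion onto those degrees of freedom is one plausible realisation (an illustration, not the authors' algorithms):

```python
import numpy as np

def constrain_translation(delta: np.ndarray, axis: np.ndarray) -> np.ndarray:
    """Keep only the translation component along the mating axis."""
    axis = axis / np.linalg.norm(axis)
    return np.dot(delta, axis) * axis

def constrain_rotation(omega: np.ndarray, axis: np.ndarray) -> np.ndarray:
    """Keep only the angular-velocity component about the mating axis."""
    axis = axis / np.linalg.norm(axis)
    return np.dot(omega, axis) * axis
```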
| A Virtual Environment for the Design and Simulated Construction of Prefabricated Buildings | | BIBAK | Full-Text | 244-256 | |
| Norman Murray; Terrence Fernando; Ghassan Aouad | |||
| The construction industry has acknowledged that its current working
practices are in need of substantial improvements in quality and efficiency and
has identified that computer modelling techniques and the use of
prefabricated components can help reduce times and costs and minimise the
defects and problems of on-site construction. This paper describes a virtual environment to support the
design and construction processes of buildings from prefabricated components
and the simulation of their construction sequence according to a project
schedule. The design environment can import a library of 3-D models of
prefabricated modules that can be used to interactively design a building.
Using Microsoft Project, the construction schedule of the designed building can
be altered, with this information feeding back to the construction simulation
environment. Within this environment the order of construction can be
visualised using virtual machines. Novel aspects of the system are that it
provides a single 3-D environment where the user can construct their design
with minimal user interaction through automatic constraint recognition and view
the real-time simulation of the construction process within the environment.
This takes the area a step beyond other systems, which only allow the
planner to view the construction at certain stages and do not provide an
animated view of the construction process. Keywords: Modular Construction; Prefabricated Components; Virtual Construction
Environment | |||
| Aspects of Haptic Feedback in a Multi-modal Interface for Object Modelling | | BIBAK | Full-Text | 257-270 | |
| J. De Boeck; C. Raymaekers; K. Coninx | |||
| In our everyday life, interaction with the world consists of a complex
mixture of audio (speech and sounds), vision and touch. Hence, we may conclude
that the most natural means of human communication is multi-modal. Our overall
research goal is to develop a natural 3D human-computer interaction
framework for modelling purposes, without mouse or keyboard, in which many
different sensing modalities are used simultaneously and cooperatively. This article
will focus on the various interface issues on the way to an intuitive
environment in which one or more users can model their prototypes in a natural
manner. Some technical framework decisions, such as messaging and network
systems, will also be investigated. Keywords: Multi-modal Interface; Force Feedback; Haptic Feedback; Virtual Modelling | |||