
Proceedings of the 2014 Conference on Graphics Interface

Fullname: Proceedings of the 2014 Graphics Interface Conference
Editors: Paul G. Kry; Andrea Bunt
Location: Montreal, Quebec, Canada
Dates: 2014-May-07 to 2014-May-09
Publisher: ACM
Standard No: ISBN 978-1-4822-6003-8; hcibib: GI14
Papers: 29
Pages: 233
  1. Invited paper
  2. Physics and collision
  3. Input techniques
  4. Real-time rendering
  5. Video and collaboration
  6. Visualization
  7. Understanding users: inking, perception and adaptation
  8. Geometry, sketching, and BRDFs

Invited paper

Visual models and ontologies (pp. 1-7)
  Eugene Fiume
Realistic computer graphics will change the way people think and communicate. Achieving deeper success as a ubiquitous medium will require a more resonant understanding of visual modelling that must embrace mathematical, philosophical, cultural, perceptual and social aspects. With an interleaved understanding, people will be able to create visual ontologies that better align to their expressive needs. In turn, this will naturally lead to ubiquitous supporting technologies. First we need good visual models. A model induces an ontology of things that inevitably omits aspects of the phenomenon, whether desired or not. Thus modelling a model's incompleteness is crucial, for it allows us to account for artifacts, errors, and ontological surprises such as the "uncanny valley". Over the years, my choice of tools to model models has been mathematics. In this paper, I will speak to how little progress we have made and how much broader our investigation must be.

Physics and collision

Task efficient contact configurations for arbitrary virtual creatures (pp. 9-16)
  Steve Tonneau; Julien Pettré; Franck Multon
A common issue in three-dimensional animation is the creation of contacts between a virtual creature and the environment. Contacts allow force exertion, which produces motion. This paper addresses the problem of computing contact configurations that allow a creature to perform motion tasks such as getting up from a sofa, pushing an object, or climbing. We propose a two-step method to generate contact configurations suitable for such tasks. The first step is an offline sampling of the reachable workspace of a virtual creature. The second step is a run-time request confronting the samples with the current environment. The best contact configurations are then selected according to a heuristic for task efficiency. The heuristic is inspired by the force transmission ratio. Given a contact configuration, it measures the potential force that can be exerted in a given direction. Our method is automatic and does not require examples or motion capture data. It is suitable for real-time applications and applies to arbitrary creatures in arbitrary environments. Various scenarios (such as climbing, crawling, getting up, pushing or pulling objects) are used to demonstrate that our method enhances motion autonomy and interactivity in constrained environments.
Seamless adaptivity of elastic models (pp. 17-24)
  Maxime Tournier; Matthieu Nesme; Francois Faure; Benjamin Gilles
A new adaptive model for viscoelastic solids is presented. Unlike previous approaches, it allows seamless transitions and simplifications in deformed states. The deformation field is generated by a set of physically animated frames. Starting from a fine set of frames and mechanical energy integration points, the model can be coarsened by attaching frames to other frames and merging integration points. Since frames can be attached in arbitrary relative positions, simplifications can occur seamlessly in deformed states, without returning to the original shape, which can be recovered later after refinement. We propose a new class of simplification criteria based on relative velocities. Integration points can be merged to reduce the computation time even further, and we show how to maintain constant elastic forces through the levels of detail. This meshless adaptivity allows significant reductions in computation time.
Efficient collision detection while rendering dynamic point clouds (pp. 25-33)
  Mohamed Radwan; Stefan Ohrhallinger; Michael Wimmer
A recent trend in interactive environments is the use of unstructured and temporally varying point clouds, driven by both affordable depth cameras and augmented reality simulations. One research question is how to perform collision detection on such point clouds. State-of-the-art methods for collision detection create a spatial hierarchy in order to capture dynamic point cloud surfaces, but they require O(N log N) time for N points. We propose a novel screen-space representation for point clouds which exploits the property that the underlying surface is 2D. To reduce dimensionality, the 3D point cloud is converted into a series of thickened layered depth images. This data structure can be constructed in O(N) time and allows for fast surface queries due to its increased compactness and memory coherency. Moreover, parts of its construction come for free since they are already handled by the rendering pipeline. As an application we demonstrate online collision detection between dynamic point clouds. Our method shows superior accuracy when compared to other methods and robustness to sensor noise, since uncertainty is hidden by the thickened boundary.
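As a rough illustration of the screen-space idea (a deliberately simplified sketch, not the authors' full pipeline), collision between two thickened depth layers at a single pixel reduces to an interval-overlap test; the layer depths and thickness below are invented values:

```python
def layers_collide(depths_a, depths_b, thickness):
    """Two thickened depth layers at the same pixel collide if their
    [z, z + thickness] intervals overlap for any pair of layer entries."""
    for za in depths_a:
        for zb in depths_b:
            if za <= zb + thickness and zb <= za + thickness:
                return True
    return False

# Hypothetical per-pixel layer depths from two point clouds
touching = layers_collide([1.00, 1.40], [1.05], thickness=0.1)  # intervals overlap
separate = layers_collide([1.00], [2.00], thickness=0.1)        # no overlap
```

The thickness term is what gives the method its robustness to sensor noise: small depth jitter stays inside the thickened band rather than flipping the test.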
Signed distance fields for polygon soup meshes (pp. 35-41)
  Hongyi Xu; Jernej Barbic
Many meshes in computer animation practice are meant to approximate solid objects, but the provided triangular geometry is often unoriented, non-manifold, or contains self-intersections, causing the inside/outside of objects to be mathematically ill-defined. We describe a robust and efficient automatic approach to define and compute a signed distance field for arbitrary triangular geometry. Starting with arbitrary (non-manifold) triangular geometry, we first define and extract an offset manifold surface using an unsigned distance field. We then automatically remove any interior surface components. Finally, we exploit the manifoldness of the offset surface to quickly detect interior distance field grid points. We prove that exterior grid points can reuse a shifted original unsigned distance field, whereas for interior cells, we compute the signed field from the offset surface geometry. We demonstrate improved performance both for exact distance fields computed using an octree and for approximate distance fields computed using fast marching. We analyze the time and memory costs for complex meshes that include self-intersections and non-manifold geometry. We demonstrate the effectiveness of our algorithm by using the signed distance field for collision detection and for the generation of tetrahedral meshes for physically based simulation.
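The interior-detection step can be illustrated with a much-simplified 2D analogue (this is not the authors' offset-surface algorithm, only a sketch of the idea that an unsigned field plus an offset threshold determines signs): flood-fill the exterior from the grid border, treating cells inside the offset band as a barrier, then negate the unreachable cells.

```python
from collections import deque

def sign_unsigned_field(unsigned, offset):
    """2D sketch: cells reachable from the grid border without crossing the
    offset band (unsigned <= offset) are exterior; the rest become negative."""
    ny, nx = len(unsigned), len(unsigned[0])
    outside = [[False] * nx for _ in range(ny)]
    queue = deque()
    for y in range(ny):
        for x in range(nx):
            on_border = y in (0, ny - 1) or x in (0, nx - 1)
            if on_border and unsigned[y][x] > offset:
                outside[y][x] = True
                queue.append((y, x))
    while queue:  # breadth-first exterior flood fill
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < ny and 0 <= x2 < nx and not outside[y2][x2] \
                    and unsigned[y2][x2] > offset:
                outside[y2][x2] = True
                queue.append((y2, x2))
    return [[d if outside[y][x] else -d for x, d in enumerate(row)]
            for y, row in enumerate(unsigned)]

# Toy field: a square 'shell' of near-zero distances enclosing one interior cell
field = [[2, 2, 2, 2, 2],
         [2, 0, 0, 0, 2],
         [2, 0, 1, 0, 2],
         [2, 0, 0, 0, 2],
         [2, 2, 2, 2, 2]]
signed = sign_unsigned_field(field, 0.5)  # the enclosed cell becomes -1
```

The paper's actual contribution replaces this naive fill with an extracted offset manifold surface, which also handles self-intersecting and non-manifold input.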

Input techniques

Experimental study of stroke shortcuts for a touchscreen keyboard with gesture-redundant keys removed (pp. 43-50)
  Ahmed Sabbir Arif; Michel Pahud; Ken Hinckley; Bill Buxton
We present experimental results for two-handed typing on a graphical Qwerty keyboard augmented with linear strokes for Space, Backspace, Shift, and Enter -- that is, swipes to the right, left, up, and diagonally down-left, respectively. A first study reveals that users are more likely to adopt these strokes, and type faster, when the keys corresponding to the strokes are removed from the keyboard, as compared to an equivalent stroke-augmented keyboard with the keys intact. A second experiment shows that the keys-removed design yields 16% faster text entry than a standard graphical keyboard for phrases containing mixed-case alphanumeric and special symbols, without increasing error rate. Furthermore, the design is easy to learn: users exhibited performance gains almost immediately, and 90% of test users indicated they would want to use it as their primary input method.
Position vs. velocity control for tilt-based interaction (pp. 51-58)
  Robert J. Teather; I. Scott MacKenzie
Research investigating factors in the design of tilt-based interfaces is presented. An experiment with 16 participants used a tablet and a 2D pointing task to compare position-control and velocity-control using device tilt to manipulate an on-screen cursor. Four selection modes were also evaluated, ranging from instantaneous selection upon hitting a target to a 500-ms time delay prior to selection. Results indicate that position-control was approximately 2× faster than velocity-control, regardless of selection delay. Position-control had higher pointing throughput (3.3 bps vs. 1.2 bps for velocity-control), more precise cursor motion, and was universally preferred by participants.
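The throughput figures above (bps) follow the standard Fitts'-law formulation used in pointing studies: effective index of difficulty divided by movement time. A minimal sketch, with made-up trial values rather than data from the study:

```python
import math

def fitts_throughput(distance, effective_width, movement_time_s):
    """Pointing throughput in bits per second: effective index of
    difficulty ID_e = log2(D / W_e + 1), divided by movement time."""
    id_e = math.log2(distance / effective_width + 1.0)
    return id_e / movement_time_s

# Hypothetical trial: 300 px movement amplitude, 40 px effective target
# width (commonly 4.133 x SD of selection endpoints), 0.9 s movement time
tp = fitts_throughput(300, 40, 0.9)  # roughly 3.4 bps
```

Averaging this quantity over trials is what allows a position-control cursor at 3.3 bps to be compared directly against velocity control at 1.2 bps.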
The performance of un-instrumented in-air pointing (pp. 59-66)
  Michelle A. Brown; Wolfgang Stuerzlinger; E. J. Mendonça Filho
We present an analysis of in-air finger- and hand-controlled object pointing and selection. The study used a tracking system that required no instrumentation on the user. We compared the performance of the two pointing methods with and without elbow stabilization and found that the method yielding the best performance varied per participant, such that no method performed significantly better than all others. We also directly compared user performance between un-instrumented in-air pointing and the mouse, and found that un-instrumented in-air pointing performed significantly worse, at less than 75% of mouse throughput. Yet the larger range of applications for un-instrumented 3D hand tracking still makes this technology an attractive option for user interfaces.
A natural click interface for AR systems with a single camera (pp. 67-75)
  Atsushi Sugiura; Masahiro Toyoura; Xiaoyang Mao
Clicking on a virtual object is the most fundamental and important interaction in augmented reality (AR). However, existing AR systems do not support natural click interfaces, because head-mounted displays with only one camera are usually adopted to realize augmented reality and it is difficult to recognize an arbitrary gesture without accurate depth information. For the ease of detection, some systems force users to make unintuitive gestures, such as pinching with the thumb and forefinger. This paper presents a new natural click interface for AR systems. Through a study investigating how users intuitively click virtual objects in AR systems, we found that the speed and acceleration of fingertips provide cues for detecting click gestures. Based on our findings, we developed a new technique for recognizing natural click gestures with a single camera by focusing on temporal differentials between adjacent frames. We further validated the effectiveness of the recognition algorithm and the usability of our new interface through experiments.

Real-time rendering

Using stochastic sampling to create depth-of-field effect in real-time direct volume rendering (pp. 77-85)
  AmirAli Sharifi; Pierre Boulanger
Real-time visualization of volumetric data is increasingly used by physicians and scientists. Enhanced depth perception in Direct Volume Rendering (DVR) plays a crucial role in applications such as clinical decision making. Our goal is to devise a flexible blurring method in DVR and ultimately improve depth perception in real-time DVR using a synthetic depth-of-field (DoF) effect. We devised a permutation-based stochastic sampling method for ray casting to render images with a DoF effect. Our method uses 2D blurring kernels in 3D space for each sample on a ray. Furthermore, we reduce the number of required samples for each kernel of size n² from n² to only 2 samples. This method is flexible and can be used for DoF, focus-context blurring, selective blurring, and potentially for other photographic effects such as the tilt effect.
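In DoF methods of this kind, the per-sample blur kernel size is typically driven by the thin-lens circle of confusion. A hedged sketch using the standard thin-lens formula (the symbols and values below are illustrative assumptions, not the paper's notation):

```python
def circle_of_confusion(aperture_diam, focal_len, focus_dist, sample_depth):
    """Thin-lens circle-of-confusion diameter for a point at sample_depth
    when the camera is focused at focus_dist (all in the same length unit)."""
    return (aperture_diam * focal_len * abs(sample_depth - focus_dist)
            / (sample_depth * (focus_dist - focal_len)))

# Hypothetical 50 mm lens at f/2 (25 mm aperture), focused at 2 m
c_sharp = circle_of_confusion(0.025, 0.05, 2.0, 2.0)  # 0.0: sample in focus
c_blur  = circle_of_confusion(0.025, 0.05, 2.0, 4.0)  # nonzero: behind focus
```

Mapping this diameter to a kernel radius at each ray sample is what lets a single blur parameter produce both sharp in-focus structures and blurred context.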
Interactive light scattering with principal-ordinate propagation (pp. 87-94)
  Oskar Elek; Tobias Ritschel; Carsten Dachsbacher; Hans-Peter Seidel
Efficient light transport simulation in participating media is challenging in general, but especially if the medium is heterogeneous and exhibits significant multiple anisotropic scattering. We present a novel finite-element method that achieves interactive rendering speeds on modern GPUs without imposing any significant restrictions on the rendered participating medium. We achieve this by dynamically decomposing all illumination into directional and point light sources, and propagating the light from these virtual sources in independent discrete propagation volumes. These are individually aligned with approximate principal directions of light propagation from the respective light sources. Such decomposition allows us to use a very simple and computationally efficient unimodal basis for representing the propagated radiance, instead of a general basis such as Spherical Harmonics. The presented approach is biased but physically plausible, and it largely reduces the rendering artifacts inherent to standard finite-element methods while allowing for virtually arbitrary scattering anisotropy and other properties of the simulated medium, without requiring any precomputation.
Micro-buffer rasterization reduction method for environment lighting using point-based rendering (pp. 95-102)
  Takahiro Harada
This paper proposes a point-based rendering pipeline for indirect illumination under environment lighting. The method improves the efficiency of the algorithm used in previous studies, which calculates direct illumination at all points in the scene; this is prohibitively expensive for environment lighting because it requires a hemispherical integration at each point. The proposed rendering pipeline reduces the number of direct illumination computations by introducing approximations and a careful selection of the points on which indirect illumination is calculated. More specifically, the rendering pipeline first selects, from the point hierarchy, primary visible points on which indirect illumination is calculated. A micro-buffer is rasterized for each primary visible point to identify secondary visible points whose direct illumination affects the indirect illumination of that primary visible point. The dependency of those points is analyzed, and approximations are introduced to reduce the number of points on which micro-buffer rasterization is executed to calculate direct illumination. After direct illumination is obtained for those points, direct illumination on all of the primary and secondary visible points is calculated by illumination propagation without rasterizing any micro-buffer. The method can be used for a dynamic scene if it is combined with a dynamic point hierarchy update.
Variable-sized, circular bokeh depth of field effects (pp. 103-107)
  Johannes Moersch; Howard J. Hamilton
We propose the Flexible Linear-time Area Gather (FLAG) blur algorithm with a variable-sized, circular bokeh for producing depth of field effects on rasterized images. The algorithm is separable (and thus linear). Bokehs can be of any convex shape including circles. The goal is to create a high quality bokeh blur effect by post processing images rendered in real-time by a 3D graphics system. Given only depth and colour information as input, the method performs three passes. The circle of confusion pass calculates the radius of the blur at each pixel and packs the input buffer for the next pass. The horizontal pass samples pixels across each row and outputs a 3D texture packed with blur information. The vertical pass performs a vertical gather on this 3D texture to produce the final blurred image. The time complexity of the algorithm is linear with respect to the maximum radius of the circle of confusion, which compares favorably with the naive algorithm, which is quadratic. The space complexity is linear with respect to the maximum radius of the circle of confusion. The results of our experiments show that the algorithm generates high quality blurred images with variable-sized circular bokehs. The implemented version of the proposed algorithm is consistently faster in practice than the implemented naive algorithm. Although some previous algorithms have provided linear performance scaling and variable sized bokehs, the proposed algorithm provides these while also permitting more flexibility in the allowed blur shapes, including any convex shape.

Video and collaboration

Casual authoring using a video navigation history (pp. 109-114)
  Matthew Fong; Abir Al Hajri; Gregor Miller; Sidney Fels
We propose the use of a personal video navigation history, which records a user's viewing behaviour, as a basis for casual video editing and sharing. Our novel interaction supports users' navigation of previously-viewed intervals to construct new videos via simple playlists. The intervals in the history can be individually previewed and searched, filtered to identify frequently-viewed sections, and added to a playlist from which they can be refined and re-ordered to create new videos. Interval selection and playlist creation using the history-based interaction are compared to a more conventional filmstrip-based technique. Using our novel interaction, participants took at most two-thirds of the time taken by the conventional method, and we found users gravitated towards the history-based mechanism for finding previously-viewed intervals compared to a state-of-the-art video interval selection method. Our study concludes that users are comfortable using a video history, and are happy to re-watch interesting parts of a video to utilize the history's advantages in an authoring context.
VisionSketch: integrated support for example-centric programming of image processing applications (pp. 115-122)
  Jun Kato; Takeo Igarashi
We propose an integrated development environment (IDE) called "VisionSketch", which supports example-centric programming for easily building image processing pipelines. With VisionSketch, a programmer is first asked to select the input video. Then, he can start building the pipeline with a visual programming language that provides immediate graphical feedback for algorithms applied to the video. He can also use a text-based editor to create or edit the implementation of each algorithm. During the development, the pipeline is always ready for execution with a video player-like interface enabling rapid iterative prototyping. In a preliminary user study, VisionSketch was positively received by five programmers, who had prior experience of writing text-based image processing programs and could successfully build interesting applications.
Fast forward with your VCR: visualizing single-video viewing statistics for navigation and sharing (pp. 123-128)
  Abir Al-Hajri; Matthew Fong; Gregor Miller; Sidney Fels
Online video viewing has seen explosive growth, yet simple tools to facilitate navigation and sharing of the large video space have not kept pace. We propose the use of single-video viewing statistics as the basis for a visualization of video called the View Count Record (VCR). Our novel visualization utilizes variable-sized thumbnails to represent the popularity (or affectiveness) of video intervals, and provides simple mechanisms for fast navigation, informed search, video previews, simple sharing, and summarization. The viewing statistics are generated from an individual's video consumption, or crowd-sourced from many people watching the same video; each provides different scenarios for application (e.g. implicit tagging of interesting events for an individual, and quick navigation to others' most-viewed scenes for crowd-sourced statistics). A comparative user study evaluates the effectiveness of the VCR by asking participants to share previously-seen affective parts within videos. Experimental results demonstrate that the VCR outperforms the state-of-the-art in a search task, and it was welcomed as a recommendation tool for clips within videos (using crowd-sourced statistics). It was perceived by participants as effective and intuitive, and was strongly preferred to current methods.
Supervisor-student research meetings: a case study on choice of tools and practices in computer science (pp. 129-135)
  Hasti Seifi; Helen Halbert; Joanna McGrenere
Supervisory meetings are a crucial aspect of graduate studies and have a strong impact on the success of research and supervisor-student relations, yet there is little research on supporting this relationship and even less on understanding the nature of this collaboration and user requirements. Thus, we conducted an exploratory study on the choice and success of tools and practices used by supervisors and students for meetings, for the purpose of making informed design recommendations. Results of a series of five focus groups and three individual interviews yielded three themes on: 1) supervisory style diversity, 2) distributed cognition demands, and 3) feedback channel dissonance. Student-supervisor collaboration has many unexplored areas for design and as a first step our work highlights potential areas for supportive designs and future research.

Visualization

Visualizing aerial LiDAR cities with hierarchical hybrid point-polygon structures (pp. 137-144)
  Zhenzhen Gao; Luciano Nocera; Miao Wang; Ulrich Neumann
This paper presents a visualization framework for cities in the form of aerial LiDAR (Light Detection and Ranging) point clouds. To provide interactive rendering for large data sets, the framework combines a level-of-detail (LOD) technique with hierarchical hybrid point-polygon representations of the scene. The supporting structure for LOD is a multi-resolution quadtree (MRQ) hierarchy that is built purely out of input points. Each MRQ node separately stores a continuous data set for ground and building points, which are sampled from continuous surfaces, and a discrete data set for independent tree points. The continuous data is first augmented with vertical quadrilateral building walls that are missing from the original points owing to the 2.5D nature of aerial LiDAR. The continuous data is then spatially partitioned into same-size subsets, from which hybrid point-polygon structures are hierarchically constructed. Specifically, a polygon conversion operation replaces the points of a subset forming a planar surface with a quadrilateral covering the same space, and a polygon simplification operation decimates the wall quadrilaterals of a subset sharing the same plane into a single compact quadrilateral. Interactive hybrid visualization is achieved by adapting hardware-accelerated point-based rendering with deferred shading. We perform experiments on several aerial LiDAR cities. Compared to visually-complete rendering [10], the presented framework delivers comparable visual quality with less than an 8% increase in pre-processing time and 2-5 times higher rendering frame rates.
Information visualization techniques for exploring oil well trajectories in reservoir models (pp. 145-150)
  Sowmya Somanath; Sheelagh Carpendale; Ehud Sharlin; Mario Costa Sousa
We present a set of interactive 3D visualizations, designed to explore oil/gas reservoir simulation post-processing models. With these visualizations we aim to provide reservoir engineers with better access to the data within their 3D models. We provide techniques for exploring existing oil well trajectories, and for planning future wells, to assist in decision making. Our approach focuses on designing visualization techniques that present the necessary details using concepts from information visualization. We created three new visualization variations -- lollipop-up, information circles and path indicator, which present well trajectory specific information in different visual formats. Our paper describes these visualizations and discusses them in context of our exploratory evaluation.
ReCloud: semantics-based word cloud visualization of user reviews (pp. 151-158)
  Ji Wang; Jian Zhao; Sheng Guo; Chris North; Naren Ramakrishnan
User reviews, like those found on Yelp and Amazon, have become an important reference for decision making in daily life, for example in dining, shopping, and entertainment. However, the large volume of available reviews makes the reading process tedious. Existing word cloud visualizations attempt to provide an overview, but their randomized layouts do not reveal content relationships to users. In this paper, we present ReCloud, a word cloud visualization of user reviews that arranges semantically related words in spatial proximity. We use a natural language processing technique called grammatical dependency parsing to create a semantic graph of review contents. We then apply a force-directed layout to the semantic graph, which generates a clustered layout of words by minimizing an energy model. Thus, ReCloud can provide users with more insight into the semantics and context of the review content. We also conducted an experiment to compare the efficiency of our method with two alternative review reading techniques: a random-layout word cloud and normal text-based reviews. The results showed that the proposed technique improves user performance and the experience of understanding a large number of reviews.
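The force-directed step can be sketched in a few lines: attract words that the dependency graph links, repel all pairs, and iterate. The code below is an illustrative minimal Fruchterman-Reingold-style layout under assumed parameters (the word list, edges, and constants are invented, not taken from ReCloud):

```python
import math
import random

def force_directed_layout(nodes, edges, iters=200, k=1.0, step=0.05):
    """Minimal spring layout: all pairs repel (k^2/d), linked pairs
    attract (d^2/k); displacement per iteration is capped at `step`."""
    random.seed(0)
    pos = {n: [random.uniform(-1, 1), random.uniform(-1, 1)] for n in nodes}
    for _ in range(iters):
        disp = {n: [0.0, 0.0] for n in nodes}
        for a in nodes:  # pairwise repulsion keeps words from overlapping
            for b in nodes:
                if a == b:
                    continue
                dx, dy = pos[a][0] - pos[b][0], pos[a][1] - pos[b][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d
                disp[a][0] += dx / d * f
                disp[a][1] += dy / d * f
        for a, b in edges:  # attraction along semantic edges
            dx, dy = pos[a][0] - pos[b][0], pos[a][1] - pos[b][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k
            disp[a][0] -= dx / d * f
            disp[a][1] -= dy / d * f
            disp[b][0] += dx / d * f
            disp[b][1] += dy / d * f
        for n in nodes:  # move each node, capped at `step` per iteration
            dl = math.hypot(*disp[n]) or 1e-9
            pos[n][0] += disp[n][0] / dl * min(step, dl)
            pos[n][1] += disp[n][1] / dl * min(step, dl)
    return pos

# Hypothetical review vocabulary with two dependency-linked pairs
words = ["food", "tasty", "service", "slow"]
links = [("food", "tasty"), ("service", "slow")]
layout = force_directed_layout(words, links)
```

Linked words settle near the equilibrium spacing k while unlinked groups drift apart, which is the clustering effect the visualization relies on.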
Geo-topo maps: hybrid visualization of movement data over building floor plans and maps (pp. 159-166)
  Quentin Ventura; Michael J. McGuffin
We demonstrate how movements of multiple people or objects within a building can be displayed on a network representation of the building, where nodes are rooms and edges are doors. Our representation shows the direction of movements between rooms and the order in which rooms are visited, while avoiding occlusion or overplotting when there are repeated visits or multiple moving people or objects. We further propose the use of a hybrid visualization that mixes geospatial and topological (network-based) representations, enabling focus-in-context and multi-focal visualizations. An experimental comparison found that the topological representation was significantly faster than the purely geospatial representation for three out of four tasks.

Understanding users: inking, perception and adaptation

How low should we go?: understanding the perception of latency while inking (pp. 167-174)
  Michelle Annett; Albert Ng; Paul Dietz; Walter F. Bischof; Anoop Gupta
Recent advances in hardware have enabled researchers to study the perception of latency. Thus far, latency research has utilized simple touch and stylus-based tasks that do not represent inking activities found in the real world. In this work, we report on two studies that utilized writing and sketching tasks to understand the limits of human perception. Our studies revealed that latency perception while inking is worse (~50 milliseconds) than perception while performing non-inking tasks reported previously (~2-7 milliseconds). We also determined that latency perception is not based on the distance from the stylus' nib to the ink, but rather on the presence of a visual referent such as the hand or stylus. The prior and current work has informed the Latency Perception Model, a framework upon which latency knowledge and the underlying mechanisms of perception can be understood and further explored.
The effect of interior bezel presence and width on magnitude judgement (pp. 175-182)
  James R. Wallace; Daniel Vogel; Edward Lank
Large displays are often constructed by tiling multiple small displays, creating visual discontinuities from inner bezels that may affect human perception of data. Our work investigates how bezels impact magnitude judgement, a fundamental aspect of perception. Two studies are described which control for bezel presence, bezel width, and user-to-display distance. Our findings form three implications for the design of tiled displays. Bezels wider than 0.5cm introduce a 4-7% increase in judgement error from a distance, which we simplify to a 5% rule of thumb when assessing display hardware. Length judgements made at arm's length are most affected by wider bezels, and are an important use case to consider. At arm's length, bezel compensation techniques provide a limited benefit in terms of judgement accuracy.
User adaptation to a faulty unistroke-based text entry technique by switching to an alternative gesture set (pp. 183-192)
  Ahmed Sabbir Arif; Wolfgang Stuerzlinger
This article presents results of two user studies to investigate user adaptation to a faulty unistroke gesture recognizer of a text entry technique. The intent was to verify the hypothesis that users gradually adapt to a faulty gesture recognition technique's misrecognition errors and that this adaptation rate is dependent on how frequently they occur. Results confirmed that users gradually adapt to misrecognition errors by replacing the error prone gestures with alternative ones, as available. Also, users adapt to a particular misrecognition error faster if it occurs more frequently than others.
The pen is mightier: understanding stylus behaviour while inking on tablets (pp. 193-200)
  Michelle Annett; Fraser Anderson; Walter F. Bischof; Anoop Gupta
Although pens and paper are pervasive in the analog world, their digital counterparts, styli and tablets, have yet to achieve the same adoption and frequency of use. To date, little research has identified why inking experiences differ so greatly between analog and digital media or quantified the varied experiences that exist with stylus-enabled tablets. By observing quantitative and behavioural data and querying preferential opinions, our experiments reaffirmed the significance of accuracy, latency, and unintended touch, whilst uncovering the importance of friction, aesthetics, and stroke beautification to users. The observed participant behaviour and recommended tangible goals should enhance the development and evaluation of future systems.

Geometry, sketching, and BRDFs

Computation of polarized subsurface BRDF for rendering (pp. 201-208)
  Charly Collin; Sumanta Pattanaik; Patrick LiKamWa; Kadi Bouatouch
Interest in the polarization properties of rendered materials is growing, but so far discussions of polarization have been restricted to surface reflection, and the reflection due to subsurface scattering is assumed to be unpolarized. Findings from other fields (e.g. optics and atmospheric science) show that volumetric interaction of light can contribute to polarization. We therefore investigated the polarized nature of the radiance field due to subsurface scattering as a function of the thickness of the material layer for various types of materials. Though our computations show negligible polarization for material layers of high thickness, thin layered materials show a significant degree of polarization. This means polarization cannot be ignored for the subsurface component of reflection from painted surfaces (particularly painted metal surfaces) or from coated materials. In this paper we employ the vector radiative transfer equation (VRTE), which is the polarized version of the radiative transfer equation inside the material. We use a discrete-ordinate-based method to solve the VRTE and compute the polarized radiance field at the surface of the material layer. We generate the polarimetric BRDF from the solutions of the VRTE for incident irradiance with different polarizations. We validate our VRTE solution against a benchmark and demonstrate our results through renderings using the computed BRDF.
Spectral global intrinsic symmetry invariant functions (pp. 209-215)
  Hui Wang; Patricio Simari; Zhixun Su; Hao Zhang
We introduce spectral Global Intrinsic Symmetry Invariant Functions (GISIFs), a class of GISIFs obtained via eigendecomposition of the Laplace-Beltrami operator on compact Riemannian manifolds, and provide associated theoretical analysis. We also discretize the spectral GISIFs for 2D manifolds approximated either by triangle meshes or point clouds. In contrast to GISIFs obtained from geodesic distances, our spectral GISIFs are robust to topological changes. Additionally, for symmetry analysis, our spectral GISIFs represent a more expressive and versatile class of functions than the classical Heat Kernel Signatures (HKSs) and Wave Kernel Signatures (WKSs). Finally, using our defined GISIFs on 2D manifolds, we propose a class of symmetry-factored embeddings and distances and apply them to the computation of symmetry orbits and symmetry-aware segmentations.
First person sketch-based terrain editing (pp. 217-224)
  Flora Ponjou Tasse; Arnaud Emilien; Marie-Paule Cani; Stefanie Hahmann; Adrien Bernhardt
We present a new method for first-person sketch-based editing of terrain models. As in typical artistic pictures, the input sketch depicts complex silhouettes with cusps and T-junctions, which typically correspond to non-planar curves in 3D. After analysing depth constraints in the sketch based on perceptual cues, our method best matches the sketched silhouettes with silhouettes or ridges of the input terrain. A specific deformation algorithm is then applied to the terrain, enabling it to exactly match the sketch from the given perspective view, while ensuring that none of the user-defined silhouettes is hidden by another part of the terrain. As our results show, this method enables users to easily personalize an existing terrain, while preserving its plausibility and style.
Coordinated particle systems for image stylization (pp. 225-233)
  Chujia Wei; David Mould
Our paper provides an approach to create line-drawing stylizations of input images. The main idea is to use particle tracing with interaction between nearby particles: the particles coordinate their movements so as to produce varied but roughly parallel traces. The particle density varies according to the tone in the input images, thereby expressing bright and dark areas. Using procedural distributions of particles, we can also generate smooth abstract patterns.