
Proceedings of the 2010 Conference on Graphics Interface

Fullname: Proceedings of the 2010 Conference on Graphics Interface
Editors: David Mould; Sylvie Noël
Location: Ottawa, Ontario, Canada
Dates: 2010-May-31 to 2010-Jun-02
Publisher: Canadian Information Processing Society
Standard No: ISSN 0713-5424; ISBN 1-56881-712-6, 978-1-56881-712-5
Papers: 36
Pages: 276
  1. Input and interaction
  2. Computer-supported cooperative work
  3. Photo zoom
  4. Background estimation
  5. Visualization
  6. Best student papers
  7. Rendering and visibility
  8. Virtual and augmented reality
  9. Modeling
  10. Navigation
  11. Gestures and pointing

Input and interaction

Exploring temporal patterns with information visualization: keynote (pp. 1-2)
  Catherine Plaisant
After an overview of visualizations to explore temporal patterns, we will focus on interfaces for discovering temporal event patterns in electronic health records. Specifying event sequence queries is challenging even for skilled computer professionals familiar with SQL. Our novel interactive search strategies allow for aligning records on important events, ranking, and filtering combined with grouping of results to find common or rare events. A second approach is to use query-by-example, in which users specify a pattern and see a similarity-ranked list of results, but the similarity measure needs to be customized for different needs. Temporal summaries allow comparisons between groups. We will discuss the methods we use to evaluate the usefulness of our interfaces through collaborations with clinicians and hospital administrators on case studies. Finally, application of the techniques to other domains will be discussed.
Graphically enhanced keyboard accelerators for GUIs (pp. 3-10)
  Jeff Hendy; Kellogg S. Booth; Joanna McGrenere
We introduce GEKA, a graphically enhanced keyboard accelerator method that provides the advantages of a traditional command line interface within a GUI environment, thus avoiding the "Fitts-induced bottleneck" of pointer movement that is characteristic of most WIMP methods. Our design rationale and prototype development were derived from a small formative user study, which suggested that advanced users would like alternatives to WIMP methods in GUIs. The results of a controlled experiment show that GEKA performs well, is faster than menu selection, and is strongly preferred over all mouse-based WIMP methods.
Characterizing large-scale use of a direct manipulation application in the wild (pp. 11-18)
  Benjamin Lafreniere; Andrea Bunt; John S. Whissell; Charles L. A. Clarke; Michael Terry
Examining large-scale, long-term application use is critical to understanding how an application meets the needs of its user community. However, there have been few published analyses of long-term use of desktop applications, and none that have examined applications that support creating and modifying content using direct manipulation. In this paper, we present an analysis of 2 years of usage data from an instrumented version of the GNU Image Manipulation Program, including data from over 200 users. In the course of our analysis, we show that previous findings concerning the sparseness of command use and idiosyncrasy of users' command vocabularies extend to a new domain and interaction style. These findings motivate continued research in adaptive and mixed-initiative interfaces. We also describe the novel application of a clustering technique to characterize a user community's higher-level tasks from low-level logging data.
Multi-modal text entry and selection on a mobile device (pp. 19-26)
  David Dearman; Amy Karlson; Brian Meyers; Ben Bederson
Rich text tasks are increasingly common on mobile devices, requiring the user to interleave typing and selection to produce the text and formatting she desires. However, mobile devices are a rich input space where input does not need to be limited to a keyboard and touch. In this paper, we present two complementary studies evaluating four different input modalities to perform selection in support of text entry on a mobile device. The modalities are: screen touch (Touch), device tilt (Tilt), voice recognition (Speech), and foot tap (Foot). The results show that Tilt is the fastest method for making a selection, but that Touch allows for the highest overall text throughput. The Tilt and Foot methods -- although fast -- resulted in users performing and subsequently correcting a high number of text entry errors, whereas the number of errors for Touch is significantly lower. Users experienced significant difficulty when using Tilt and Foot in coordinating the format selections in parallel with the text entry. This difficulty resulted in more errors and therefore lower text throughput. Touching the screen to perform a selection is slower than tilting the device or tapping the foot, but the action of moving the fingers off the keyboard to make a selection ensured high precision when interleaving selection and text entry. Additionally, mobile devices offer a breadth of promising rich input methods that need to be carefully studied in situ when deciding if each is appropriate to support a given task; it is not sufficient to study the modalities independent of a natural task.
A new interface for cloning objects in drawing systems (pp. 27-34)
  Loutfouz Zaman; Wolfgang Stuerzlinger
Cloning objects is a common operation in graphical user interfaces. One example is calendar systems, where users commonly create and modify recurring events, i.e. repeated clones of a single event. Inspired by the calendar paradigm, we introduce a new cloning technique for 2D drawing programs. This technique allows users to clone objects by first selecting them and then dragging them to create clones along the dragged path. Moreover, it allows editing the generated sequences of clones similar to the editing of calendar events. Novel approaches for the generation of clones of clones are also presented.
   We compared our new clone creation technique with generic duplication via copy-and-paste, smart duplication, and a dialog-driven technique on a standard desktop system. The results show that the new cloning method is faster than the dialog technique in all conditions and faster than smart duplication in most. We also compared our clone editing method against rectangular selection. The results show that our method is better in general and remains competitive even in situations where rectangle selection is effective. Participants also preferred the new techniques overall.
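
To make the drag-to-clone idea concrete, here is a minimal Python sketch that places clones at regular intervals along a straight drag path. The function name, the spacing parameter, and the straight-line simplification are illustrative assumptions, not the authors' implementation:

    import math

    def clones_along_path(origin, drag_end, spacing):
        # Emit one clone every `spacing` units along the straight path
        # from the selected object at `origin` to the drag endpoint.
        dx, dy = drag_end[0] - origin[0], drag_end[1] - origin[1]
        length = math.hypot(dx, dy)
        if length < spacing:
            return []
        ux, uy = dx / length, dy / length
        n = int(length // spacing)  # how many clones fit on the path
        return [(origin[0] + ux * spacing * i, origin[1] + uy * spacing * i)
                for i in range(1, n + 1)]

    # Dragging 100 units right with 25-unit spacing yields 4 clone positions.
    print(clones_along_path((0, 0), (100, 0), 25))
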
A comparison of techniques for in-place toolbars (pp. 35-38)
  Andre Doucette; Carl Gutwin; Regan L. Mandryk
Selections are often carried out using toolbars that are located far away from the location of the cursor. To reduce the time to make these selections, researchers have proposed in-place toolbars such as Toolglasses or popup palettes. Even though in-place toolbars have been known for a long time, there are factors influencing their performance that have not been investigated. To explore the subtleties of different designs for in-place toolbars, we implemented and compared three approaches: warping the cursor to the toolbar, having the toolbar pop up over the cursor, and showing the toolbar on the trackpad itself to allow direct touch. Our study showed that all three new techniques were faster than traditional static toolbars, but also uncovered important differences between the three in-place versions. Participants made selections significantly faster with the direct-touch trackpad, whereas warping the cursor's location caused a time-consuming attentional shift. These results provide a better understanding of how small changes to in-place toolbar techniques can affect performance.

Computer-supported cooperative work

Translation by iterative collaboration between monolingual users (pp. 39-46)
  Chang Hu; Benjamin B. Bederson; Philip Resnik
In this paper we describe a new iterative translation process designed to leverage the massive number of online users who have minimal or no bilingual skill. The iterative process is supported by combining existing machine translation methods with monolingual human speakers. We have built a Web-based prototype that is capable of yielding high quality translations at much lower cost than traditional professional translators. Preliminary evaluation results of this prototype confirm the validity of the approach.
Automatic camera control using unobtrusive vision and audio tracking (pp. 47-54)
  Abhishek Ranjan; Jeremy Birnholtz; Ravin Balakrishnan; Dana Lee
While video can be useful for remotely attending and archiving meetings, the video itself is often dull and difficult to watch. One key reason for this is that, except in very high-end systems, little attention has been paid to the production quality of the video being captured. The video stream from a meeting often lacks detail and camera shots rarely change unless a person is tasked with operating the camera. This stands in stark contrast to live television, where a professional director creates engaging video by juggling multiple cameras to provide a variety of interesting views. In this paper, we applied lessons from television production to the problem of using automated camera control and selection to improve the production quality of meeting video. In an extensible and robust approach, our system uses off-the-shelf cameras and microphones to unobtrusively track the location and activity of meeting participants, control three cameras, and cut between these to create video with a variety of shots and views, in real-time. Evaluation by users and independent coders suggests promising initial results and directions for future work.
Awareness beyond the desktop: exploring attention and distraction with a projected peripheral-vision display (pp. 55-62)
  Jeremy Birnholtz; Lindsay Reynolds; Eli Luxenberg; Carl Gutwin; Maryam Mustafa
The initiation of interaction in face-to-face settings is often a gradual negotiation process that takes place in a rich context of awareness and social signals. This gradual approach to interaction is missing from most online messaging systems, however, and users often have no idea when others are paying attention to them or when they are about to be interrupted. One reason for this limitation is that few systems have considered the role of peripheral perception in attracting and directing interpersonal attention in face-to-face interaction. We believed that a display exploiting people's peripheral vision could capitalize on natural human attention-management behavior. To test the value of this technique, we compared a peripheral-vision awareness display with an on-screen IM-style system. We expected that people would notice more information from the larger peripheral display, which they did. Moreover, they did so while attending less often to the peripheral display. Our study suggests that peripheral-vision awareness displays may be able to improve attention and awareness management for distributed groups.
Users' (mis)conceptions of social applications (pp. 63-70)
  Andrew Besmer; Heather Richter Lipford
Many social network sites, such as Facebook and MySpace, feature social applications, applications and services written by third party developers that provide additional functionality linked to a user's profile. Current platforms allow these applications to consume much of a user's profile information, as well as the profile information of the user's friends. Researchers are proposing mechanisms to reduce the risks of this data sharing, yet these efforts need to be informed with an understanding of application use and impressions. This paper examines users' motivations, intentions, and concerns with using applications, as well as their perceptions of data sharing. Our results indicate that the social interaction driving application use is also leading to a lack of awareness of data sharing, its risks, and its implications.

Photo zoom

Photo zoom: high resolution from unordered image collections (pp. 71-78)
  Martin Eisemann; Elmar Eisemann; Hans-Peter Seidel; Marcus Magnor
We present a system to automatically construct high resolution images from an unordered set of low resolution photos. It consists of an automatic preprocessing step that establishes correspondences between the given photos. The user may then choose one image, and the algorithm automatically creates a higher resolution result, up to several octaves larger, at the desired resolution. Our recursive creation scheme transfers specific details at subpixel positions of the original image. It adds plausible details to regions not covered by any of the input images and eases the acquisition of large scale panoramas spanning different resolution levels.
Interactive content-aware zooming (pp. 79-87)
  Pierre-Yves Laffont; Jong Yun Jun; Christian Wolf; Yu-Wing Tai; Khalid Idrissi; George Drettakis; Sung-eui Yoon
We propose a novel, interactive content-aware zooming operator that allows effective and efficient visualization of high resolution images on small screens, which may have different aspect ratios compared to the input images. Our approach applies an image retargeting method in order to fit an entire image into the limited screen space. This can provide global, but approximate views for lower zoom levels. However, as we zoom more closely into the image, we continuously unroll the distortion to provide local, but more detailed and accurate views for higher zoom levels. In addition, we propose to use an adaptive view-dependent mesh to achieve high retargeting quality, while maintaining interactive performance. We demonstrate the effectiveness of the proposed operator by comparing it against the traditional zooming approach, and a method stemming from a direct combination of existing works.

Background estimation

Real-time video matting using multichannel Poisson equations (pp. 89-96)
  Minglun Gong; Liang Wang; Ruigang Yang; Yee-Hong Yang
This paper presents a novel matting algorithm for processing video sequences in real-time and online. The algorithm is based on a set of novel Poisson equations that are derived for handling multichannel color vectors, as well as the depth information captured. A simple yet effective approach is also proposed to compute an initial alpha matte in the color space. Real-time processing speed is achieved by optimizing the algorithm for parallel processing on the GPU. To process live video sequences online and autonomously, a modified background cut algorithm is implemented to separate foreground and background, the result of which guides the automatic trimap generation. Quantitative evaluation on still images shows that the alpha mattes extracted using the presented algorithm are much more accurate than those obtained using the global Poisson matting algorithm and are comparable to those of other state-of-the-art offline image matting techniques.
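
For readers unfamiliar with Poisson matting, the sketch below shows the classical single-channel formulation that the paper generalizes to multichannel color vectors: the matte is recovered by solving laplacian(alpha) ~ div(grad(I) / (F - B)) with the trimap as boundary condition. The global-contrast approximation of F - B and all names are simplifying assumptions, not the authors' GPU implementation:

    import numpy as np

    def poisson_matte(image, trimap, iters=500):
        # trimap: 0 = background, 1 = foreground, 0.5 = unknown region.
        alpha = trimap.astype(np.float64).copy()
        unknown = trimap == 0.5
        # Crude global estimate of the foreground/background contrast F - B.
        fb = max(image[trimap == 1].mean() - image[trimap == 0].mean(), 1e-3)
        gy, gx = np.gradient(image.astype(np.float64) / fb)
        div = np.gradient(gy, axis=0) + np.gradient(gx, axis=1)
        # Jacobi iterations for laplacian(alpha) = div, held fixed
        # outside the unknown region.
        for _ in range(iters):
            nb = (np.roll(alpha, 1, 0) + np.roll(alpha, -1, 0) +
                  np.roll(alpha, 1, 1) + np.roll(alpha, -1, 1))
            alpha[unknown] = (nb - div)[unknown] / 4.0
        return np.clip(alpha, 0.0, 1.0)
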
Background estimation using graph cuts and inpainting (pp. 97-103)
  Xida Chen; Yufeng Shen; Yee Hong Yang
In this paper, we propose a new method, which requires no interactive operation, to estimate the background from an image sequence with occluding objects. The images are taken from the same viewpoint under similar illumination conditions. Our method combines the information from the input images by selecting the appropriate pixels to construct the background. We make two simple assumptions about the input image sequence: each background pixel is visible in at least one image, and some parts of the background are never occluded. We propose a cost function that includes a data term and a smoothness term. A unique feature of our data term is that it has not only the stationary term, but also a new predicted term obtained using an image inpainting technique. The smoothness term guarantees that the output is visually smooth, so that there is no need for post-processing. The cost is minimized by applying graph-cut optimization. We apply our algorithm to several complex natural scenes as well as to an image sequence with different camera exposure settings, and the results are encouraging.
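
The shape of such a cost function can be sketched as follows; the code only evaluates the energy for a given labeling (a real system minimizes it with graph cuts), and every variable name here is an assumption rather than the paper's notation:

    import numpy as np

    def background_energy(labels, frames, inpaint_pred, lam=1.0):
        # labels: (H, W) ints choosing, per pixel, which frame supplies
        # the background; frames: (N, H, W, 3) aligned inputs;
        # inpaint_pred: (H, W, 3) inpainting-based background prediction.
        median = np.median(frames, axis=0)  # stationary reference
        h, w = labels.shape
        yy, xx = np.mgrid[0:h, 0:w]
        chosen = frames[labels, yy, xx]     # per-pixel selected colors
        # Data term: stationary component plus the new predicted component.
        data = (np.abs(chosen - median).sum() +
                np.abs(chosen - inpaint_pred).sum())
        # Smoothness term: adjacent pixels should composite without seams.
        smooth = (np.abs(chosen[1:] - chosen[:-1]).sum() +
                  np.abs(chosen[:, 1:] - chosen[:, :-1]).sum())
        return data + lam * smooth
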

Visualization

Interactive visualization and navigation of web search results revealing community structures and bridges (pp. 105-112)
  Arnaud Sallaberry; Faraz Zaidi; Christian Pich; Guy Melançon
With the information overload on the Internet, organization and visualization of web search results so as to facilitate faster access to information is a necessity. The classical methods present search results as an ordered list of web pages ranked in terms of relevance to the searched topic. Users thus have to scan text snippets or navigate through various pages before finding the required information. In this paper we present an interactive visualization system for content analysis of web search results. The system combines a number of algorithms to present a novel layout methodology which helps users to analyze and navigate through a collection of web pages. We have tested this system with a number of data sets and have found it very useful for the exploration of data. Different case studies are presented based on searching different topics on Wikipedia through Exalead's search engine.
Interactive searching and visualization of patterns in attributed graphs (pp. 113-120)
  Pierre-Yves Koenig; Faraz Zaidi; Daniel Archambault
Searching for patterns in graphs and visualizing the search results is an active area of research with numerous applications. With the continual growth of database size, querying these databases often results in multiple solutions. Text-based systems present search results as a list, and going over all solutions can be tedious. In this paper, we present an interactive visualization system that helps users find patterns in graphs and visualizes the search results. The user draws a source pattern and labels it with attributes. Based on these attributes and connectivity constraints, simplified subgraphs are generated, containing all the possible solutions. The system is quite generic and capable of searching patterns and approximate solutions in a variety of data sets.
Improving interaction models for generating and managing alternative ideas during early design work (pp. 121-128)
  Brittany N. Smith; Anbang Xu; Brian P. Bailey
A principle of early design work is to generate and manage multiple ideas, as this fosters creative insight. As computer tools are increasingly used for early design work, it is critical to understand how their interaction models affect idea management. This paper reports results of a user study comparing how the use of three interaction models -- tab interfaces, layered canvases, and spatial maps -- affects working with multiple ideas. Designers (N=18) created and managed ideas for realistic design tasks using each model. We observed strategies for creating and managing ideas, measured process outcomes and tool interactions, and gained extensive participant feedback. From the results, we derive new lessons that can be broadly applied to improve how interfaces support multiple ideas, and we implement the lessons within one model to demonstrate their efficacy.

Best student papers

Visual links across applications (pp. 129-136)
  Manuela Waldner; Werner Puff; Alexander Lex; Marc Streit; Dieter Schmalstieg
The tasks carried out by modern information workers have become increasingly complex and time-consuming. They often require users to evaluate, interpret, and compare information from different sources presented in multiple application windows. With large, high resolution displays, multiple application windows can be arranged so that a large amount of information is visible simultaneously. However, individual application windows' contents and visual representations are isolated, and relations between information items contained in these windows are not explicit. Thus, relating and comparing information across applications has to be done manually by the user, which is a tedious and error-prone task.
   In this paper we present visual links connecting related pieces of information across application windows and thereby guiding the user's attention to relevant information. Applications are coordinated by a management application accessible via a light-weight interface. User selections are synchronized across registered applications and visual links are rendered on top of the desktop content by a window manager. Initial user feedback was very positive and indicates that visual links improve task efficiency when analyzing information from multiple sources.
Interactive illustrative visualization of hierarchical volume data (pp. 137-144)
  Jean-Paul Balabanian; Ivan Viola; Eduard Gröller
In scientific visualization the underlying data often has an inherent abstract and hierarchical structure. Therefore, the same dataset can simultaneously be studied with respect to its characteristics in the three-dimensional space and in the hierarchy space. Often both characteristics are equally important to convey. For such scenarios we explore the combination of hierarchy visualization and scientific visualization, where both data spaces are effectively integrated. We have been inspired by illustrations of species evolutions where hierarchical information is often present. Motivated by these traditional illustrations, we introduce integrated visualizations for hierarchically organized volumetric datasets. The hierarchy data is displayed as a graph, whose nodes are visually augmented to depict the corresponding 3D information. These augmentations include images produced by volume raycasting, slicing of 3D structures, and indicators of structure visibility from occlusion testing. New interaction metaphors are presented that extend visualizations and interactions, typical for one visualization space, to control visualization parameters of the other space. Interaction on a node in the hierarchy influences visual representations of 3D structures and vice versa. We integrate both the abstract and the scientific visualizations into one view which avoids frequent refocusing typical for interaction with linked-view layouts. We demonstrate our approach on different volumetric datasets enhanced with hierarchical information.

Rendering and visibility

Two-level ray tracing with reordering for highly complex scenes (pp. 145-152)
  Johannes Hanika; Alexander Keller; Hendrik P. A. Lensch
We introduce a ray tracing scheme, which is able to handle highly complex geometry modeled by the classic approach of surface patches tessellated to micro-polygons, where the number of micro-polygons can exceed the available memory. Two techniques allow us to carry out global illumination computations in such scenes and to trace the resulting incoherent sets of rays efficiently. First, we rely on a bottom-up technique for building the bounding volume hierarchy (BVH) over tessellated patches in time linear in the number of micro-polygons. Second, we present a highly parallel two-stage ray tracing algorithm, which minimizes the number of tessellation steps by reordering rays. The technique can accelerate rendering of scenes that would result in billions of micro-polygons and efficiently handles complex shading operations.
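
The reordering idea can be illustrated with a small scheduling sketch: rays are queued per patch after a coarse top-level hit, so each patch is tessellated once per batch rather than once per ray. The callables and names below, and the simplification of one candidate patch per ray, are stand-ins, not the authors' system:

    from collections import defaultdict

    def trace_with_reordering(rays, top_level_hit, tessellate_and_trace):
        # Stage 1: intersect rays against a coarse BVH over untessellated
        # patches and queue each ray under the patch it may hit.
        queues = defaultdict(list)
        for i, ray in enumerate(rays):
            patch_id = top_level_hit(ray)
            if patch_id is not None:
                queues[patch_id].append(i)
        # Stage 2: tessellate each patch to micro-polygons once, trace its
        # whole ray batch, then evict the tessellation before the next patch.
        hits = {}
        for patch_id, idxs in queues.items():
            batch = [rays[i] for i in idxs]
            for i, hit in zip(idxs, tessellate_and_trace(patch_id, batch)):
                hits[i] = hit
        return hits
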
Memory efficient ray tracing with hierarchical mesh quantization (pp. 153-160)
  Benjamin Segovia; Manfred Ernst
We present a lossily compressed acceleration structure for ray tracing that encodes the bounding volume hierarchy (BVH) and the triangles of a scene together in a single unified data structure. Total memory consumption of our representation is smaller than previous comparable methods by a factor of 1.7 to 4.8, and it achieves performance similar to the fastest uncompressed data structures. We store quantized vertex positions as local offsets to the leaf bounding box planes and encode them in bit strings. Triangle connectivity is represented as a sequence of strips inside the leaf nodes. The BVH is stored in a compact quantized format. We describe techniques for efficient implementation using register SIMD instructions (SSE). Hierarchical mesh quantization (HMQ) with 16 bits of accuracy achieves an average compression rate of 5.7:1 in comparison to a BVH and an indexed face set. The performance impact is only 11 percent for packet tracing and 17 percent for single ray path tracing on average.
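
The core idea of storing vertices as fixed-point offsets local to their leaf bounding box can be sketched in a few lines; the 16-bit round trip below is illustrative and omits the paper's bit-string packing and strip encoding:

    import numpy as np

    def quantize_vertex(v, box_min, box_max, bits=16):
        # Map the vertex into [0, 1]^3 relative to its leaf bounding box,
        # then to fixed-point integers with `bits` bits per axis.
        scale = (1 << bits) - 1
        t = (np.asarray(v, float) - box_min) / (np.asarray(box_max) - box_min)
        return np.round(t * scale).astype(np.uint32)

    def dequantize_vertex(q, box_min, box_max, bits=16):
        scale = (1 << bits) - 1
        return box_min + (q / scale) * (np.asarray(box_max) - box_min)

    # A vertex round-trips with error bounded by the box extent / 2^16:
    q = quantize_vertex([0.25, 0.5, 0.75], np.zeros(3), np.ones(3))
    print(q, dequantize_vertex(q, np.zeros(3), np.ones(3)))
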
Hybrid rendering of dynamic heightfields using ray-casting and mesh rasterization (pp. 161-168)
  Lucas Ammann; Olivier Génevaux; Jean-Michel Dischler
This paper presents a flexible hybrid method designed to render heightfield data, such as terrains, on the GPU. It combines two traditional techniques, namely mesh-based rendering and per-pixel ray-casting. A heuristic is proposed to dynamically choose between these two techniques. To balance rendering performance against quality, an adaptive mechanism is introduced that depends on viewing conditions and heightfield characteristics. It manages the precision of the ray-casting rendering, while mesh rendering is reserved for the finest level of detail. Our method is GPU accelerated and achieves real-time rendering performance with high accuracy. Moreover, contrary to most terrain rendering methods, our technique does not rely on time-consuming pre-processing steps to update complex data structures. As a consequence, it gracefully handles dynamic heightfields, making it useful for interactive terrain editing or real-time simulation processes.
Frontier sets in large terrains (pp. 169-176)
  Shachar Avni; James Stewart
In current online games, player positions are synchronized by means of continual broadcasts through the server. This solution is expensive, forcing any server to limit its number of clients. With a hybrid networking architecture, player synchronization can be distributed to the clients, bypassing the server bottleneck and decreasing latency as a result. Synchronization in a decentralized fashion is difficult, as each player must communicate with every other player. The communication requirements can be reduced by computing and exploiting frontier sets: for a pair of players in an online game, their frontier set consists of two regions of the game space, one per player, in which each player may move without seeing (and therefore without needing to communicate with) the other player. This paper describes the first fast and space-efficient method of computing frontier sets in large terrains.
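
A frontier set can be illustrated over a precomputed cell-to-cell visibility table. The greedy construction below (which assumes the two players' current cells are mutually hidden) conveys the definition only; it is not the paper's fast, space-efficient algorithm:

    def frontier_pair(cell_a, cell_b, visible):
        # visible[i][j] is True if any point of cell i sees any point of
        # cell j (assumed symmetric). Build sets F_a and F_b, containing
        # the players' current cells, such that no cell of F_a sees a
        # cell of F_b; while each player stays inside their own set, no
        # position updates need to be exchanged.
        n = len(visible)
        f_a = {i for i in range(n) if not visible[i][cell_b]}
        f_b = {j for j in range(n)
               if all(not visible[i][j] for i in f_a)}
        # Shrink F_a so the two sets are mutually hidden.
        f_a = {i for i in f_a if all(not visible[i][j] for j in f_b)}
        return f_a, f_b
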

Virtual and augmented reality

Visuohaptic borescope inspection simulation training: modeling multi-point collision detection/response and evaluating skills transfer (pp. 177-184)
  Deepak Vembar; Andrew Duchowski; Melissa Paul; Anand Gramopadhye; Carl Washburn
Results are presented from a transfer effects study of a visuohaptic borescope simulator developed for non-destructive aircraft inspection training. The borescope simulator supports multi-point collision detection to effect haptic feedback as the virtual probe slides along and collides with rigid surfaces. Such probe maneuvering is shown to be a significant aspect of the inspection task that benefits from training, regardless of whether a real or virtual probe simulator is used to provide the training.
Whale Tank Virtual Reality (pp. 185-192)
  Evgeny Maksakov; Kellogg S. Booth; Kirstie Hawkey
Whale Tank Virtual Reality (VR) is a novel head-coupled VR technique for collocated collaboration. It allows multiple users to observe a 3D scene from the correct perspective through their own personal viewport into the virtual scene and to interact with the scene on a large touch screen display. There are two primary benefits to Whale Tank VR: 1) Head coupling allows a user to experience the sense of a third dimension and to observe difficult-to-see objects without requiring navigation beyond natural head movement. 2) Multiple viewports enable collocated collaboration by seamlessly adjusting the head-coupled perspectives in each viewport according to the proximity of collaborators to ensure a consistent display at all times. One potential disadvantage that we had to consider was that head-coupling might reduce awareness of a collocated coworker's actions in the 3D scene. We therefore conducted an experiment to study the influence of head coupling on users' awareness-and-recall of actions in a simulated collaborative situation for several levels of task difficulty. Results revealed no statistically significant difference in awareness-and-recall performance with or without the presence of head coupling. This suggests that in situations where head coupling is employed, there is no degradation in users' awareness of collocated activity.
Techniques for view transition in multi-camera outdoor environments (pp. 193-200)
  Eduardo Veas; Alessandro Mulloni; Ernst Kruijff; Holger Regenbrecht; Dieter Schmalstieg
Environment monitoring using multiple observation cameras is increasingly popular. Different techniques exist to visualize the incoming video streams, but only a few evaluations are available to determine the one best suited for a given task and context. This article compares three techniques for browsing video feeds from cameras that are located around the user in an unstructured manner. The techniques allow mobile users to gain extra information about the surroundings, the objects, and the actors in the environment by observing a site from different perspectives. The techniques relate local and remote cameras topologically, via a tunnel, or via a bird's eye viewpoint. Their common goal is to enhance the spatial awareness of the viewer without relying on a model or previous knowledge of the environment. We introduce several factors of spatial awareness inherent to multi-camera systems, and present a comparative evaluation of the proposed techniques with respect to spatial understanding and workload.
Seek-n-Tag: a game for labeling and classifying virtual world objects (pp. 201-208)
  Bei Yuan; Manjari Sapre; Eelke Folmer
Virtual worlds that rely on user-generated content often lack accurate metadata for their objects. This lack of metadata is a problem for users who are visually impaired, as they rely upon textual descriptions of objects being present in order to access virtual worlds using assistive technology such as a screen reader or tactile display. This paper presents a scavenger-hunt game for the virtual world of Second Life -- called SEEK-N-TAG -- that allows sighted users to label objects as well as collaboratively develop a taxonomy for objects. SEEK-N-TAG aims to build a set of objects with accurate metadata that can be used as training data for an automatic object classifier. Our approach is novel in that the game is implemented in the virtual world itself, so as to improve that world's accessibility. A user study with 10 participants revealed that labeling objects with a game is more effective and accurate than manually naming objects.

Modeling

Crafting 3D faces using free form portrait sketching and plausible texture inference (pp. 209-216)
  Tanasai Sucontphunt; Borom Tunwattanapong; Zhigang Deng; Ulrich Neumann
In this paper we propose a sketch-based interface for drawing and generating a realistic, textured 3D human face. The free form sketch-based interface allows a user to intuitively draw a portrait as in traditional pencil sketching. The user's drawing is then automatically reshaped to an accurate, natural human facial shape with the guidance of a statistical description model, and an artistic-style portrait rendering technique is used to render the work-in-progress face sketch. Furthermore, with additional user-specified information, e.g., gender, ethnicity, and skin tone, a realistic face texture can be synthesized for the portrait through our probabilistic face texture inference model. Lastly, the textured portrait is used to construct a realistic 3D face model with the 3D morphable face model algorithm. Through our preliminary user evaluations, we found that with this system even novice users were able to efficiently craft a convincing, realistic 3D face within three minutes.
Component-based model synthesis for low polygonal models (pp. 217-224)
  Nicolas Maréchal; Éric Galin; Éric Guérin; Samir Akkouche
This paper presents a method for semi-automatically generating a variety of different objects from an initial low polygonal model. Our approach aims at generating large sets of models with small variants, with a view to avoiding the instance replication that produces unrealistic repetitive patterns. The generation process consists in decomposing the initial object into a set of components. Their geometry and texture are edited, and the modified components are then combined to create a large set of varying models. Our method has been implemented in the Twilight 2 development framework of Eden Games and Widescreen Games and successfully tested on different types of models.
Image-assisted modeling from sketches (pp. 225-232)
  Luke Olsen; Faramarz F. Samavati
In this paper, we propose a method for creating freeform surfaces from sketch-annotated images. Beginning from an image, the user sketches object boundaries, features, and holes. Sketching is made easier by a magnetic pen that follows strong edges in the image. To create a surface from the sketch, a planar mesh is constructed such that its geometry aligns with the boundary and interior features. We then inflate to 3D using a discrete distance transform filtered through a cross-sectional mapping function. Finally, the input image is applied as a texture to the surface. The benefits of our framework are demonstrated with examples in modeling both freeform and manufactured objects.
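
The inflation step can be sketched with an off-the-shelf distance transform; the sqrt profile below stands in for the paper's cross-sectional mapping function, and all names are illustrative assumptions:

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def inflate(mask, profile=np.sqrt):
        # Each interior pixel's height comes from its distance to the
        # sketched boundary, filtered through a cross-section profile.
        d = distance_transform_edt(mask)
        return profile(d / max(d.max(), 1e-9))

    # A filled disc inflates to a rounded, dome-like height field.
    yy, xx = np.mgrid[-64:64, -64:64]
    heights = inflate(xx ** 2 + yy ** 2 < 48 ** 2)
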

Navigation

Anchored navigation: coupling panning operation with zooming and tilting based on the anchor point on a map (pp. 233-240)
  Kazuyuki Fujita; Yuichi Itoh; Kazuki Takashima; Yoshifumi Kitamura; Takayuki Tsukitani; Fumio Kishino
We propose two novel map navigation techniques, called Anchored Zoom (AZ) and Anchored Zoom and Tilt (AZT). In these techniques, the zooming and tilting of a virtual camera are automatically coupled with users' panning displacements so that the anchor point determined by users always remains in a viewport. This allows users to manipulate a viewport without mode-switching among pan, zoom, and tilt while maintaining a sense of distance and direction from the anchor point.
   We conducted an experiment to evaluate AZ and AZT and compare them with Pan & Zoom (PZ) [17] and Speed-dependent Automatic Zooming (SDAZ) [10] in off-screen target acquisition tasks and spatial recognition tests. Results showed that our proposed techniques were more effective than the competing techniques at reducing the time needed to reach off-screen objects, while maintaining users' sense of distance and direction as well as PZ does.
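
The coupling can be conveyed with a toy scale rule: as the viewport center pans away from the anchor, the camera zooms out just enough to keep the anchor on screen. The margin parameter and all names are assumptions for illustration, not the paper's camera model:

    def anchored_zoom_scale(anchor, center, viewport_half_extent, margin=0.9):
        # Return a zoom-out factor that keeps `anchor` inside the viewport
        # while the user pans `center` away from it.
        dx = abs(center[0] - anchor[0])
        dy = abs(center[1] - anchor[1])
        dist = max(dx, dy)  # Chebyshev distance suits a rectangular viewport
        return max(1.0, dist / (margin * viewport_half_extent))

    # Panning 2000 units from the anchor with a 500-unit half-extent
    # viewport zooms out by roughly 4.4x, so the anchor stays visible.
    print(anchored_zoom_scale((0, 0), (2000, 0), 500))
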
TouchMark: flexible document navigation and bookmarking techniques for e-book readers (pp. 241-244)
  Doug Wightman; Tim Ginn; Roel Vertegaal
We present TouchMark, a set of page navigation techniques that preserve some of the physical affordances of paper books. TouchMark introduces physical tabs, one on each side of the display, to enable gestures such as page thumbing and bookmarking. TouchMark can be implemented on a variety of electronic devices, including tablet computers and laptops, by augmenting standard hardware with inexpensive sensors.

Gestures and pointing

A lightweight multistroke recognizer for user interface prototypes (pp. 245-252)
  Lisa Anthony; Jacob O. Wobbrock
With the expansion of pen- and touch-based computing, new user interface prototypes may incorporate stroke gestures. Many gestures comprise multiple strokes, but building state-of-the-art multistroke gesture recognizers is nontrivial and time-consuming. Luckily, user interface prototypes often do not require state-of-the-art recognizers that are general and maintainable, due to the simpler nature of most user interface gestures. To enable easy incorporation of multistroke recognition in user interface prototypes, we present $N, a lightweight, concise multistroke recognizer that uses only simple geometry and trigonometry. A full pseudocode listing is given as an appendix.
   $N is a significant extension to the $1 unistroke recognizer, which has seen quick uptake in prototypes but has key limitations. $N goes further by (1) recognizing gestures comprising multiple strokes, (2) automatically generalizing from one multistroke to all possible multistrokes using alternative stroke orders and directions, (3) recognizing one-dimensional gestures such as lines, and (4) providing bounded rotation invariance. In addition, $N uses two speed optimizations: one, based on start angles, saves 79.1% of comparisons and increases accuracy 1.3%; the other, which is optional, compares multistroke templates and candidates only if they have the same number of strokes, bringing the total savings in comparisons to 89.5% and increasing accuracy another 1.7%. These results are taken from our study of algebra symbols entered in situ by middle and high schoolers using a math tutor prototype, on which $N was 96.6% accurate with 15 templates.
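
To make the matching core concrete, here is a compact sketch of the $1-style geometric pipeline that $N extends (resample, normalize, average pointwise distance). $N's multistroke permutation, start-angle pruning, and bounded rotation invariance are omitted, and the helper names are illustrative:

    import math

    def resample(points, n=64):
        # Redistribute the stroke into n equidistant points.
        pts = [tuple(p) for p in points]
        step = sum(math.dist(pts[i - 1], pts[i])
                   for i in range(1, len(pts))) / (n - 1)
        out, d, i = [pts[0]], 0.0, 1
        while i < len(pts):
            seg = math.dist(pts[i - 1], pts[i])
            if seg > 0 and d + seg >= step:
                t = (step - d) / seg
                q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                     pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
                out.append(q)
                pts.insert(i, q)  # continue measuring from the new point
                d = 0.0
            else:
                d += seg
            i += 1
        return (out + [pts[-1]])[:n]

    def normalize(points):
        # Translate the centroid to the origin and scale to a unit box.
        cx = sum(x for x, _ in points) / len(points)
        cy = sum(y for _, y in points) / len(points)
        w = (max(x for x, _ in points) - min(x for x, _ in points)) or 1.0
        h = (max(y for _, y in points) - min(y for _, y in points)) or 1.0
        return [((x - cx) / w, (y - cy) / h) for x, y in points]

    def path_distance(a, b):
        # Average pointwise distance: the recognizer picks the template
        # with the smallest score against the candidate stroke.
        return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)
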
Design and evaluation of interaction models for multi-touch mice (pp. 253-260)
  Hrvoje Benko; Shahram Izadi; Andrew D. Wilson; Xiang Cao; Dan Rosenfeld; Ken Hinckley
Adding multi-touch sensing to the surface of a mouse has the potential to substantially increase the number of interactions available to the user. However, harnessing this increased bandwidth is challenging, since the user must perform multi-touch interactions while holding the device and using it as a regular mouse. In this paper we describe the design challenges and formalize the design space of multi-touch mouse interactions. From our design space categories we synthesize four interaction models which enable the use of both multi-touch and mouse interactions on the same device. We describe the results of a controlled user experiment evaluating the performance of these models in a 2D spatial manipulation task typical of touch-based interfaces and compare them to interacting directly on a multi-touch screen and with a regular mouse. We observed that our multi-touch mouse interactions were overall slower than the chosen baselines; however, techniques providing a single focus of interaction and explicit touch activation yielded better performance and higher preferences from our participants. Our results expose the difficulties in designing multi-touch mouse interactions and define the problem space for future research in making these devices effective.
Understanding users' preferences for surface gestures (pp. 261-268)
  Meredith Ringel Morris; Jacob O. Wobbrock; Andrew D. Wilson
We compare two gesture sets for interactive surfaces -- a set of gestures created by an end-user elicitation method and a set of gestures authored by three HCI researchers. Twenty-two participants who were blind to the gestures' authorship evaluated 81 gestures presented and performed on a Microsoft Surface. Our findings indicate that participants preferred gestures authored by larger groups of people, such as those created by end-user elicitation methodologies or those proposed by more than one researcher. This preference pattern seems to arise in part because the HCI researchers proposed more physically and conceptually complex gestures than end-users. We discuss our findings in detail, including the implications for surface gesture design.
A comparison of ray pointing techniques for very large displays (pp. 269-276)
  Ricardo Jota; Miguel A. Nacenta; Joaquim A. Jorge; Sheelagh Carpendale; Saul Greenberg
Ray-pointing techniques are often advocated as a way for people to interact with very large displays from several meters away. We are interested in two factors that can affect ray pointing: the particular technique's control type, and parallax.
   Consequently, we tested four ray pointing variants on a wall display that covers a large part of the user's field of view. Tasks included horizontal and vertical targeting, and tracing. Our results show that (a) techniques based on 'rotational control' perform better for targeting tasks, and (b) techniques with low parallax are best for tracing tasks. We also show that a Fitts's law analysis based on angles (as opposed to linear distances) better approximates people's ray pointing performance.
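
As a worked illustration of the angular analysis, the index of difficulty can be computed from subtended angles instead of on-screen distances. The geometry below (a flat wall viewed from distance `dist`) and all names are assumptions for illustration, not the authors' exact model:

    import math

    def subtended(a, b, dist):
        # Angle, from the user's eye, subtended by the wall segment [a, b].
        return abs(math.atan2(b, dist) - math.atan2(a, dist))

    def angular_id(start, target_center, target_width, dist):
        # Fitts's index of difficulty over angles: log2(alpha / omega + 1).
        alpha = subtended(start, target_center, dist)               # movement
        omega = subtended(target_center - target_width / 2,
                          target_center + target_width / 2, dist)   # target
        return math.log2(alpha / omega + 1)

    # A 20 cm target centered 2 m along the wall, user standing 3 m away:
    print(angular_id(0.0, 2.0, 0.2, 3.0))
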