
Proceedings of the 2011 Conference on Graphics Interface

Fullname: Proceedings of the 2011 Conference on Graphics Interface
Editors: Stephen Brooks; Pourang Irani
Location: St. John's, Newfoundland and Labrador, Canada
Dates: 2011-May-25 to 2011-May-27
Publisher: Canadian Information Processing Society
Standard No: ISSN: 0713-5424; ISBN: 1-4503-0693-4, 978-1-4503-0693-5; ACM DL: Table of Contents; hcibib: GI11
Links: Conference Home Page | Conference Series Home Page
  1. Animation
  2. Target acquisition & interaction
  3. Modeling
  4. Visualization interfaces
  5. Rendering
  6. Graph interaction
  7. Image compositing and ordering
  8. Video games

Animation

Formation sketching: an approach to stylize groups in crowd simulation BIBA 1-8
  Qin Gu; Zhigang Deng
Most existing crowd simulation algorithms focus on the moving trajectories of individual agents, while collective group formations are often roughly learned from video examples or manually specified via various hard constraints (e.g., pre-defined keyframes of exact agent distributions). In this paper, we present an intuitive yet efficient approach to generating arbitrary and precise group formations by sketching formation boundaries. Our approach can automatically compute the desired position of each agent in the target formation and generate the agent correspondences between keyframes. When high-level group formations need to be formed on the fly in a dynamic environment, such as "switching to the circle formation at about one hundred meters ahead", our algorithm coordinates and computes appropriate actions for each agent by seamlessly fusing local formation dynamics and global group locomotion. Through a number of experiments, we demonstrate that our approach is efficient and adaptive to variations in group scale (i.e., number of agents), group position, and environment obstacles.
A hybrid interpolation scheme for footprint-driven walking synthesis BIBA 9-16
  Ben J. H. van Basten; Sybren A. Stüvel; Arjan Egges
In many constrained environments, precise control of foot placement during character locomotion is crucial to avoid collisions and to ensure natural locomotion. In this paper, we present a new exact motion synthesis technique that generates planar parameterized stepping motions based on a combination of rotational and Cartesian interpolation. Existing stepping motions are blended in a linearized representation to guarantee exact control of foot placement. By concatenating these parameterized steps, we can generate highly constrained stepping animations in real time. Furthermore, thanks to a novel blend-candidate selection strategy, soft constraints such as stance duration and foot orientation are also taken into account. We will show that our technique can generate a variety of different stepping animations efficiently, even though we impose many constraints on the animation.
Physically based baking animations with smoothed particle hydrodynamics BIBA 17-24
  Omar Rodriguez-Arenas; Yee-Hong Yang
In this paper, we propose a new model for creating physically-based animations of the baking process. Our model is capable of reproducing the fluid-solid phase transition, volume expansion, and surface browning that take place during the baking process. Furthermore, an adaptive field function is presented that is able to reconstruct the surface of the baked good as its volume expands. The model is very flexible in that it can reproduce the mechanical properties of a wide array of fluids from thin fluids to semi-solids. The sequences presented show that the proposed model can produce animations of different and peculiar types of bread.
Mid-level smoke control for 2D animation BIBA 25-32
  Alfred Barnat; Zeyang Li; James McCann; Nancy S. Pollard
In this paper we introduce the notion that artists should be able to control fluid simulations by providing examples of expected local fluid behavior (for instance, an artist might specify that magical smoke often forms star shapes). As our idea fits between high-level, global pose control and low-level parameter adjustment, we deem it mid-level control. We make our notion concrete by demonstrating two mid-level controllers providing stylized smoke effects for two-dimensional animations. With these two controllers, we allow the artist to specify both density patterns, or particle motifs, which should emerge frequently within the fluid, and global texture motifs to which the fluid should conform. Each controller is responsible for constructing a stylized version of the current fluid state, which we feed back into a global pose control method. This feedback mechanism allows the smoke to retain fluid-like behavior while also attaining a stylized appearance suitable for integration with 2D animations. We integrate these mid-level controls with an interactive animation system, in which the user can control and keyframe all animation parameters using an interactive timeline view.

Target acquisition & interaction

Target following performance in the presence of latency, jitter, and signal dropouts BIBA 33-40
  Andriy Pavlovych; Wolfgang Stuerzlinger
In this paper we describe how human target-following performance changes in the presence of latency, latency variations, and signal dropouts. Many modern games and game systems allow for networked, remote participation, and in such networks latency, jitter, and dropouts are commonly encountered. Our user study reveals that all of the investigated factors decrease tracking performance. The errors increase very quickly for latencies of over 110 ms, for latency jitters above 40 ms, and for dropout rates of more than 10%. The effects of target velocity on errors are close to linear, and transverse errors are smaller than longitudinal ones. The results can be used to better quantify the effects of these factors on moving objects in interactive scenarios. They also aid designers in selecting target sizes and velocities, as well as in adjusting smoothing, prediction, and compensation algorithms.
Pop-up depth views for improving 3D target acquisition BIBA 41-48
  Guangyu Wang; Michael J. McGuffin; François Bérard; Jeremy R. Cooperstock
We present the design and experimental evaluation of pop-up depth views, a novel interaction technique for aiding in the placement or positioning of a 3D cursor or object. Previous work found that in a 3D placement task, a 2D mouse used with multiple orthographic views outperformed a 3D input device used with a perspective view with stereo. This was the case even though the mouse required two clicks to complete the task instead of the single click required with the 3D input device. We improve performance with 3D input devices through pop-up depth views: small inset views in a perspective display of the scene. These provide top and side views of the immediate 3D neighborhood of the cursor, allowing the user to see more easily along the depth dimension and improving the user's effective depth acuity. In turn, positioning with the 3D input device is also improved. Furthermore, because the depth views are displayed near the 3D cursor, only tiny eye movements are required for the user to perceive the 3D cursor's depth with respect to nearby objects. Pop-up depth views are displayed only when the user's cursor slows down, so they do not occlude the 3D scene when the user is moving quickly. Our experimental evaluation shows that the combination of a 3D input device used with a perspective view, stereo projection, and pop-up depth views outperforms a 2D mouse in a 3D target acquisition task, in terms of both movement time and throughput, but at the cost of a slightly higher error rate.
3D sketching using interactive fabric for tangible and bimanual input BIBA 49-56
  Anamary Leal; Doug Bowman; Laurel Schaefer; Francis Quek; Clarissa "K" Stiles
As an input device, fabric holds potential benefits for three dimensional (3D) interaction in the domain of surface design, which includes designing objects from clothing to metalwork. To investigate these benefits, we conducted an exploratory study of different users' natural interactions with fabric. During this study, we instructed users to communicate various shapes and surfaces of varying complexity. A prevailing way of communicating shapes proved to be an in-the-air sketch metaphor. Based on this result, we proposed and implemented a system supporting three in-the-air sketch-based input devices: a point, a flexible curve, and a flexible surface. A preliminary feasibility study found that users successfully sketched objects and scenes despite the influence of tracking issues, suggesting lessons learned and relevant constraints for such systems in future work.
2D similarity transformations on multi-touch surfaces BIBA 57-64
  Behrooz Ashtiani; Wolfgang Stuerzlinger
We present and comparatively evaluate two new object transformation techniques for multi-touch surfaces. Specifying complete two-dimensional similarity transformations requires a minimum of four degrees of freedom: two for position, one for rotation, and another for scaling. Many existing techniques for object transformation are designed to function with traditional input devices such as mice, single-touch surfaces, or stylus pens. The challenge is to map controls appropriately for each of these devices. A few multi-touch techniques have been proposed in the past, but no comprehensive evaluation has been presented.
   XNT is a new three-finger object transformation technique, designed for multi-touch surfaces. It provides a natural interface for two-dimensional manipulation. XNT and several existing techniques were evaluated in a user study. The results show that XNT is superior for all tasks that involve scaling and competitive for tasks that involve only rotation and positioning.
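The four degrees of freedom named in the abstract (two for position, one for rotation, one for scale) can be recovered from just two touch points using complex arithmetic. The sketch below is a generic two-touch mapping offered for illustration only; it is not the three-finger XNT technique itself, and the function name is hypothetical.

```python
# Hypothetical illustration, NOT the XNT technique from the paper:
# recover a 2D similarity transformation (x/y translation, rotation,
# uniform scale) from two touch points before and after a drag.
import cmath

def similarity_from_two_touches(p0, p1, q0, q1):
    """p0, p1: initial touch positions (x, y); q0, q1: current positions."""
    a, b = complex(*p0), complex(*p1)
    c, d = complex(*q0), complex(*q1)
    z = (d - c) / (b - a)   # encodes rotation (phase) and scale (modulus)
    t = c - z * a           # translation; apply the map as w -> z*w + t
    return z, t

# Fingers start at (0,0) and (1,0); they end at (2,2) and (2,4):
z, t = similarity_from_two_touches((0, 0), (1, 0), (2, 2), (2, 4))
scale = abs(z)              # 2.0: the fingers moved twice as far apart
angle = cmath.phase(z)      # pi/2: a 90-degree counter-clockwise turn
```

Both touch points are mapped exactly, which is why two fingers suffice for a similarity transform; a third finger, as in XNT, can disambiguate or constrain the interaction further.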
Syntherella: a feedback synthesizer for efficient exploration of virtual worlds using a screen reader BIBA 65-70
  Bugra Oktay; Eelke Folmer
Thanks to recent efforts, virtual worlds are, at the very least, partially accessible to users with visual impairments. However, generating effective non-visual descriptions of these environments is still a challenge. Sighted users can process an entire scene with many objects in an instant, but screen-reader users may easily be overwhelmed by the amount of information when it is transformed into linear speech feedback. Additionally, user studies from our previous work show that iteratively querying the environment for detailed descriptions slows down the interaction significantly.
   Syntherella is a feedback synthesizer that aims to provide meaningful, effective yet concise textual representations of the virtual worlds while minimizing the required number of queries for accessing information.

Modeling

Occlusion tiling BIBA 71-78
  Dorian Gomez; Pierre Poulin; Mathias Paulin
The creation of realistic, complex, and diversified virtual worlds is of utmost importance for video games. Unfortunately, creating 3D scene content can be extremely tedious and time-consuming for graphic artists. While procedural modeling can alleviate this task, it has mostly been developed for specific contexts.
   In this paper, we study tiling for synthetic worlds, taking into account visibility between tiles. We propose a method, Occlusion Tiling, that precomputes full 2D occlusion caused by tiles in order to ensure that a limited number of tiles can be visible from any viewpoint on the tiling. These tiles are then used as extruded 3D scenes, thus bounding the number of polygons sent to the graphics rendering pipeline for guaranteed throughput.
Approximative occlusion culling using the hull tree BIBA 79-86
  Tim Süß; Clemens Koch; Claudius Jähn; Matthias Fischer
Occlusion culling is a common approach to accelerating real-time rendering of polygonal 3D scenes by reducing the rendering load. Especially for large scenes, it is necessary to remove occluded objects to achieve a frame rate that supports an interactive environment. To benefit properly from culling, hierarchical data structures are often used. These data structures typically create a spatial subdivision of a given scene into axis-aligned bounding boxes. These boxes can be tested quickly, but they are not very accurate: the objects they contain are reported as visible even if other objects occlude them (false positives). To get perfect results, the original geometry contained in each box would have to be tested, but this would require too much computational power. To overcome this problem, approximations of the original objects could be used, but typical mesh simplification methods cannot be applied because they do not create an outer hull for a given object.
   We present a model simplification algorithm that generates simple outer hulls, consisting of only a few more triangles than a box, while preserving an object's shape better than a corresponding bounding box. This approach is then extended to a hierarchical data structure, the so-called hull tree, which can be generated for a given scene to improve the visibility tests.
   Next, we present an approximative rendering algorithm, which combines the features of the hull tree with the use of inner hulls for efficient occlusion detection and global state-sorting of the visible objects.
Component-based modeling of complete buildings BIBA 87-94
  Luc Leblanc; Jocelyn Houle; Pierre Poulin
We present a system to procedurally generate complex models with interdependent elements. Our system relies on the concept of components to spatially and semantically define various elements. Through a series of successive statements executed on a subset of components selected with queries, we grow a tree of components ultimately defining a model.
   We apply our concept and representation of components to the generation of complete buildings, with coherent interior and exterior. It proves general and well adapted to support subdivision of volumes, insertion of openings, embedding of staircases, decoration of façades and walls, layout of furniture, and various other operations required when constructing a complete building.
Data structures for interactive high resolution level-set surface editing BIBA 95-102
  Manolya Eyiyurekli; David E. Breen
This paper presents data structures that enable interactive editing of large-scale level-set surface models. The new approach utilizes spatial hashing to store a narrow band of voxels around the level-set interface, as well as a k-d tree to hold the model's display points that lie on the surface itself. This sparse representation of voxels and surface points lets us create and modify high-resolution level-set models with modest memory requirements, while allowing fast data access/modification and interactive graphics updates. The data structures also support out-of-the-box editing, i.e. no bounding box limits the surface editing region, a restriction common when utilizing 3-D arrays. We formally define the level-set representation and demonstrate its interactive performance and scalability through manipulation of high-resolution level-set surface models.
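The storage idea described above, a narrow band of voxels in a spatial hash rather than a dense 3-D array, can be sketched in a few lines. This is a minimal illustration under assumed conventions (signed distances, a fixed half-width), not the authors' implementation; sign handling for far voxels and band rebuilding are omitted.

```python
# Illustrative sketch, not the paper's code: a narrow band of level-set
# voxels stored in a hash keyed by integer coordinates, so no bounding
# box limits the editing region.

class NarrowBand:
    def __init__(self, band_width=3.0):
        self.band_width = band_width   # half-width of the band, in voxels
        self.voxels = {}               # (i, j, k) -> signed distance

    def set(self, i, j, k, dist):
        # Store only voxels near the interface; evict those drifting out.
        if abs(dist) <= self.band_width:
            self.voxels[(i, j, k)] = dist
        else:
            self.voxels.pop((i, j, k), None)

    def get(self, i, j, k):
        # Voxels outside the band default to the band boundary value.
        d = self.voxels.get((i, j, k))
        if d is not None:
            return d
        return self.band_width  # sign handling omitted in this sketch

band = NarrowBand()
band.set(10, 20, 30, 0.5)   # stored: inside the narrow band
band.set(10, 20, 31, 4.0)   # outside the band: not stored
```

Because the hash grows only where editing happens, memory scales with the surface area of the model rather than the volume of a bounding box.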

Visualization interfaces

Visual encodings that support physical navigation on large displays BIBA 103-110
  Alex Endert; Christopher Andrews; Yueh Hua Lee; Chris North
Visual encodings are the medium through which information is displayed, perceived, interpreted, and finally transferred from a visualization to the user. Traditionally, such encodings display information as representations of length, color, size, slope, position, and other glyphs. Guidelines for such encodings have been proposed, but they generally assume a small display, small datasets, and a relatively static user. Large, high-resolution visualizations are able to display far more information simultaneously, allowing users to leverage physical navigation (movement) as an effective interaction through which to explore the data space. In this paper, we analyze if and how the choice of visual encodings for large, high-resolution visualizations affects physical navigation, and ultimately task performance for a spatial information visualization task.
AppMap: exploring user interface visualizations BIBA 111-118
  Michael Rooke; Tovi Grossman; George Fitzmaurice
In traditional graphical user interfaces, the majority of UI elements are hidden to the user in the default view, as application designers and users desire more space for their application data. We explore the benefits of dedicating additional screen space for presenting an alternative visualization of an application's user interface. Some potential benefits are to assist users in examining complex software, understanding the extent of an application's capabilities, and exploring the available features. We propose user interface visualizations, alternative representations of an application's interface augmented with usage information. We introduce a design space for UI visualizations and describe some initial prototypes and insights based on this design space. We then present AppMap, our new design, which displays the entire function set of AutoCAD and allows the user to interactively explore the visualization which is augmented with visual overlays displaying analytical data about the functions and their relations. In our initial studies, users welcomed this new presentation of functionality, and the unique information that it presents.
Towards ideal window layouts for multi-party, gaze-aware desktop videoconferencing BIBA 119-126
  Sasa Junuzovic; Kori Inkpen; Rajesh Hegde; Zhengyou Zhang
In high-end desktop videoconferencing systems, several windows compete for screen space, particularly when users also share an application. Ideally, the layout of these windows should satisfy both (a) layout guidelines for establishing a rich communication channel and (b) user preferences for window layouts. This paper presents an exploration of user preferences and their interplay with previously established window layout guidelines. Based on results from two user studies, we have created five recommendations for user-preferred window layouts in high-end desktop videoconferencing systems. Both designers and end-users can use these recommendations to set up "ideal" layouts, that is, layouts that satisfy both user preferences and existing layout guidelines. For instance, we have developed an application that utilizes the recommendations to guide users towards ideal layouts during a videoconference.
Structure-preserving stippling by priority-based error diffusion BIBA 127-134
  Hua Li; David Mould
This paper presents a new fast, automatic method for structure-aware stippling. The core idea is to concentrate on structure preservation by using a priority-based scheme that treats extremal stipples first and preferentially assigns positive error to lighter stipples and negative error to darker stipples, emphasizing contrast. We also use a nonlinear spatial function to shrink or exaggerate errors and thus implicitly provide global adjustment of density. Our adjustment respects contrast and hence allows us to preserve structure even with very low stipple budgets. We also explore a variety of stylization effects, including screening and scratchboard, all within the unifying framework of stippling.
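The priority scheme summarized above can be conveyed with a toy version: process the most extremal pixels (those nearest black or white) first, and diffuse each pixel's quantization error only to not-yet-processed neighbors. This is a simplified sketch for intuition, not the published algorithm; the priority key and uniform error split are assumptions.

```python
# Toy sketch, NOT the published method: priority-based error diffusion
# in which extremal pixels are quantized first.

def priority_stipple(img):
    h, w = len(img), len(img[0])
    vals = [row[:] for row in img]       # working copy; 0.0=black, 1.0=white
    # Priority order: distance from mid-gray, most extremal first.
    order = sorted(((i, j) for i in range(h) for j in range(w)),
                   key=lambda p: -abs(img[p[0]][p[1]] - 0.5))
    done = set()
    out = [[0] * w for _ in range(h)]
    for i, j in order:
        out[i][j] = 1 if vals[i][j] >= 0.5 else 0   # 1 = white, 0 = stipple
        err = vals[i][j] - out[i][j]
        done.add((i, j))
        nbrs = [(i + di, j + dj)
                for di, dj in ((0, 1), (0, -1), (1, 0), (-1, 0))
                if 0 <= i + di < h and 0 <= j + dj < w
                and (i + di, j + dj) not in done]
        for ni, nj in nbrs:              # push the leftover error outward
            vals[ni][nj] += err / len(nbrs)
    return out
```

Because extremal pixels are fixed before their neighbors absorb any error, strong black and white features survive even when few stipples are available, which is the intuition behind the paper's structure preservation.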
Ubiquitous cursor: a comparison of direct and indirect pointing feedback in multi-display environments BIBA 135-142
  Robert Xiao; Miguel A. Nacenta; Regan L. Mandryk; Andy Cockburn; Carl Gutwin
Multi-display environments (MDEs) connect several displays into a single digital workspace. One of the main problems to be solved in an MDE's design is how to enable movement of objects from one display to another. When the real-world space between displays is modeled as part of the workspace (i.e., Mouse Ether), it becomes difficult for users to keep track of their cursors during a transition between displays. To address this problem, we developed the Ubiquitous Cursor system, which uses a projector and a hemispherical mirror to completely cover the interior of a room with usable low-resolution pixels. Ubiquitous Cursor allows us to provide direct feedback about the location of the cursor between displays. To assess the effectiveness of this direct-feedback approach, we carried out a study that compared Ubiquitous Cursor with two other standard approaches: Halos, which provide indirect feedback about the cursor's location; and Stitching, which warps the cursor between displays, similar to the way that current operating systems address multiple monitors. Our study tested simple cross-display pointing tasks in an MDE; the results showed that Ubiquitous Cursor was significantly faster than both other approaches. Our work shows the feasibility and the value of providing direct feedback for cross-display movement, and adds to our understanding of the principles underlying targeting performance in MDEs.

Rendering

Implicit and dynamic trees for high performance rendering BIBA 143-150
  Nathan Andrysco; Xavier Tricoche
Recent advances in GPU architecture and programmability have enabled the computation of ray casted or ray traced images at interactive frame rates. However, the rapid performance gains of the hardware cannot by themselves address the challenge posed by the steady growth in the geometric and temporal complexity of computer graphics datasets. In this paper we present a novel versatile tree data structure that can accommodate both sparse and dense data sets and is more memory efficient than state-of-the-art representations. A key feature of our data structure for rendering applications is that it fully supports efficient, parallel building. As a result, our implicit tree representation significantly outperforms existing techniques in the rendering of time-varying scenes. We show how this data structure can be extended to encode other classic representations such as BSP-trees and we discuss the high-performance implementation of our general approach on the GPU.
A mathematical framework for efficient closed-form single scattering BIBA 151-158
  Vincent Pegoraro; Mathias Schott; Philipp Slusallek
Analytic approaches to efficiently simulating accurate light transport in homogeneous participating media have recently received attention in the graphics community. Although a closed-form solution to the single-scattering air-light integral has been derived for a generic representation of 1-D angular distributions, its high order of computational complexity limits its practical applicability. In this paper, we introduce alternative algebraic formulations of the solution that entirely preserve its closed-form nature while effectively reducing its order of complexity. The analytic derivations yield a significant decrease in the computational cost of the evaluation scheme, and the substantial gains in performance achieved by the method make high-quality light transport simulation considerably more applicable to both real-time and off-line rendering.
Sample-space bright spots removal using density estimation BIBA 159-166
  Anthony Pajot; Loïc Barthe; Mathias Paulin
Rendering images using Monte-Carlo estimation is prone to bright-spot artefacts: high-intensity pixels that appear when a very low probability sample outweighs all other sample contributions. We present an averaging estimator that is robust to outliers: it detects and removes the outlier samples that lead to bright spots in images computed using Monte-Carlo estimation. By progressively building a per-pixel representation of the luminance distribution, our method is able to delay samples whose luminance is an outlier with respect to the current distribution. This distribution is continuously updated so that delayed samples may be reconsidered as viable later in the rendering process, making the approach robust. Our method does not suffer from blurring in high-frequency zones. It can easily be integrated into any Monte-Carlo-based rendering system, used in conjunction with any adaptive sampling scheme, and it introduces a very small computational overhead, which is negligible compared to the use of over-sampling.
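The delay-and-reconsider idea can be illustrated with a toy per-pixel estimator: hold back luminance samples that are extreme outliers relative to the samples accepted so far, and re-admit them if the running distribution later justifies it. The threshold rule below (k times the median) is an assumption made for illustration; it is not the paper's estimator.

```python
# Toy sketch of delaying outlier samples, NOT the published estimator.
import statistics

def robust_pixel_mean(samples, k=10.0):
    accepted, delayed = [], []
    for s in samples:
        if len(accepted) < 4:
            accepted.append(s)           # warm-up: accept unconditionally
            continue
        if s > k * (statistics.median(accepted) + 1e-9):
            delayed.append(s)            # probable firefly: delay it
        else:
            accepted.append(s)
        # Re-examine delayed samples against the updated distribution.
        med = statistics.median(accepted)
        still_delayed = []
        for d in delayed:
            if d <= k * (med + 1e-9):
                accepted.append(d)       # re-admitted as a viable sample
            else:
                still_delayed.append(d)
        delayed = still_delayed
    return sum(accepted) / len(accepted)
```

On a pixel whose samples are mostly near 1.0 with a single sample of 1000.0, the plain average is pulled to roughly 25 while this estimator stays near 1.0, which is exactly the bright spot being suppressed.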
Render-time procedural per-pixel geometry generation BIBA 167-174
  Jean-Eudes Marvie; Pascal Gautron; Patrice Hirtzlin; Gael Sourimant
We introduce procedural geometry mapping and ray-dependent grammar development for fast and scalable render-time generation of procedural geometric details on graphics hardware. By leveraging the properties of the widely used split grammars, we replace geometry generation by lazy per-pixel grammar development. This approach drastically reduces the memory costs while implicitly concentrating the computations on objects spanning large areas in image space. Starting with a building footprint, the bounding volume of each facade is projected towards the viewer. For each pixel we lazily develop the grammar describing the facade and intersect the potentially visible split rules and terminal shapes. Further geometric details are added using normal and relief mapping in terminal space. Our approach also supports the computation of per-pixel self shadowing on facades for high visual quality. We demonstrate interactive performance even when generating and tuning large cityscapes comprising thousands of facades. The method is generalized to arbitrary mesh-based shapes to provide full artistic control over the generation of the procedural elements, making it also usable outside the context of urban modeling.

Graph interaction

Improving revisitation in graphs through static spatial features BIBA 175-182
  Sohaib Ghani; Niklas Elmqvist
People generally remember locations in visual spaces with respect to spatial features and landmarks. Geographical maps provide many spatial features and hence are easy to remember. However, graphs are often visualized as node-link diagrams with few spatial features. We evaluate whether adding static spatial features to node-link diagrams will help in graph revisitation. We discuss three strategies for embellishing a graph and evaluate each in a user study. In our first study, we evaluate how to best add background features to a graph. In the second, we encode position using node size and color. In the third and final study, we take the best techniques from the first and second study, as well as shapes added to the graph as virtual landmarks, to find the best combination of spatial features for graph revisitation. We discuss the user study results and give our recommendations for design of graph visualization software.
The effect of animation, dual view, difference layers, and relative re-layout in hierarchical diagram differencing BIBA 183-190
  Loutfouz Zaman; Ashish Kalra; Wolfgang Stuerzlinger
We present a new system for visualizing and merging differences in diagrams that uses animation, dual views, a storyboard, relative re-layout, and layering. We ran two user studies investigating the benefits of the system. The first user study compared pairs of hierarchical diagrams with matching node positions. The results underscore that naïve dual-view visualization is undesirable. On the positive side, participants particularly liked the dual-view with difference layer technique. The second user study focused on diagrams with partially varying node positions and difference visualization and animation. We found evidence that both techniques are beneficial, and that the combination was preferred.

Image compositing and ordering

Edge-constrained image compositing BIBA 191-198
  Martin Eisemann; Daniel Gohlke; Marcus Magnor
The classic task of image compositing is complicated by the fact that the source and target images need to be carefully aligned and adjusted. Otherwise, it is not possible to achieve convincing results. Visual artifacts are caused by image intensity mismatch, image distortion or structure misalignment even if the images have been globally aligned. In this paper we extend classic Poisson blending by a constrained structure deformation and propagation method. This approach can solve the above-mentioned problems and proves useful for a variety of applications, e.g. in de-ghosting of mosaic images, classic image compositing or other applications such as super-resolution from image databases. Our method is based on the following basic steps. First, an optimal partitioning boundary is computed between the input images. Then, features along this boundary are robustly aligned and deformation vectors are computed. Starting at these features, salient edges are traced and aligned, serving as additional constraints for the smooth deformation field, which is propagated robustly and smoothly into the interior of the target image. If very different images are to be stitched, we propose to base the deformation constraints on the curvature of the salient edges for C1-continuity of the structures between the images.
Data organization and visualization using self-sorting map BIBA 199-206
  Grant Strong; Minglun Gong
This paper presents the Self-Sorting Map (SSM), a novel algorithm for organizing and visualizing data. Given a set of data items and a dissimilarity measure between each pair of them, the SSM places each item into a unique cell of a structured layout, where the most related items are placed together and the unrelated ones are spread apart. The algorithm nicely integrates ideas from dimension reduction techniques, sorting algorithms, and data clustering approaches. Instead of solving the continuous optimization problem as other dimension reduction approaches do, the SSM transforms it into a discrete labeling problem. As a result, it can organize a set of data into a structured layout without overlapping, providing a simple and intuitive presentation. Experiments on different types of data show that the SSM can be applied to a variety of applications, ranging from visualizing semantic relatedness between articles to organizing image search results based on visual similarities. Our current SSM implementation using Java is fast enough for interactively organizing datasets with hundreds of entries.
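The discrete-labeling view can be illustrated on a one-dimensional strip of cells: accept any pairwise swap that lowers the total dissimilarity between neighboring cells. This greedy sketch assumes scalar items with |a - b| as the dissimilarity and is far simpler than the published SSM, but it shows how a layout problem becomes a labeling problem with no overlapping.

```python
# Greedy sketch of the discrete-labeling idea, NOT the published SSM.

def sort_into_cells(items, passes=50):
    cells = list(items)                  # one item per cell, no overlap

    def cost(layout):
        # Sum of dissimilarities between adjacent cells.
        return sum(abs(layout[i] - layout[i + 1])
                   for i in range(len(layout) - 1))

    for _ in range(passes):
        improved = False
        for i in range(len(cells)):
            for j in range(i + 1, len(cells)):
                trial = cells[:]
                trial[i], trial[j] = trial[j], trial[i]
                if cost(trial) < cost(cells):
                    cells, improved = trial, True
        if not improved:
            break
    return cells
```

For scalar items the minimum-cost layout is a monotonic ordering, so related (numerically close) items end up in adjacent cells; the SSM applies the same principle on a 2-D grid with general dissimilarities.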

Video games

Effects of view, input device, and track width on video game driving BIBA 207-214
  Scott Bateman; Andre Doucette; Robert Xiao; Carl Gutwin; Regan L. Mandryk; Andy Cockburn
Steering and driving tasks -- where the user controls a vehicle or other object along a path -- are common in many simulations and games. Racing video games have provided users with different views of the visual environment -- e.g., overhead, first-person, and third-person views. Although research has been done in understanding how people perform using a first-person view in virtual reality and driving simulators, little empirical work has been done to understand the factors that affect performance in video games. To establish a foundation for thinking about view in the design of driving games and simulations, we carried out three studies that explored the effects of different view types on driving performance. We also considered how view interacts with difficulty and input device. We found that although there were significant effects of view on performance, these were not in line with conventional wisdom about view. Our explorations provide designers with new empirical knowledge about view and performance, but also raise a number of new research questions about the principles underlying view differences.
Investigating communication and social practices in real-time strategy games: are in-game tools sufficient to support the overall gaming experience? BIBA 215-222
  Phillip J. McClelland; Simon J. Whitmell; Stacey D. Scott
This paper discusses the social and strategic communication patterns observed during gameplay of the real-time strategy game StarCraft II. An observational study was conducted over three weeks, covering approximately 26 game matches and the social procedures by which players organized themselves and selected game options. Study participants were members of a pre-existing network of friends and had adopted the Skype voice communication tool to supplement the game client's built-in collaboration and social networking solutions. The players were observed in situations with varying levels of collaboration, ranging from team matches to free-for-all matches, and many forms of communication, both strategic and social, were observed. The study findings revealed that players prefer communication tools that provide both robustness and flexibility. Preferred tools increase ease of access to other players, introduce a measure of exception handling to unify the gameplay experience, and make use of the game as a virtual watercooler -- a hub which can facilitate much off-topic, yet valued, conversation.
Pet-N-Punch: upper body tactile/audio exergame to engage children with visual impairments into physical activity BIBA 223-230
  Tony Morelli; John Foley; Lauren Lieberman; Eelke Folmer
Individuals with visual impairments have significantly higher levels of obesity and often exhibit delays in motor development, caused by a general lack of opportunities to be physically active. Tactile/audio-based exergames that involve only motions of the dominant arm have been successfully explored to engage individuals with visual impairments in physical activity. This paper presents an accessible exergame called Pet-N-Punch that can be played using one or two arms. A user study with 12 children who were blind showed that they were able to achieve light to moderate physical activity, but no significant difference in energy expenditure was detected between the two versions. The two-arm version had a significantly higher error rate than the one-arm version, which suggests that the two-arm version imposes a higher cognitive load. Players were found to be able to respond to tactile/audio cues within 2500 ms.