
Human-Computer Interaction 24

Editors: Thomas P. Moran
Dates: 2009
Volume: 24
Publisher: Taylor and Francis Group
Standard No: ISSN 0737-0024
Papers: 11
Links: Table of Contents
  1. HCI 2009 Volume 24 Issue 1/2
  2. HCI 2009 Volume 24 Issue 3
  3. HCI 2009 Volume 24 Issue 4

HCI 2009 Volume 24 Issue 1/2

Introduction to this Special Issue on Ubiquitous Multi-Display Environments BIBFull-Text 1-8
  Ravin Balakrishnan; Patrick Baudisch
Toward the Digital Design Studio: Large Display Explorations BIBAFull-Text 9-47
  Azam Khan; Justin Matejka; George Fitzmaurice; Gord Kurtenbach; Nicolas Burtnyk; Bill Buxton
Inspired by our automotive and product design customers using large displays in design centers, visualization studios, and meeting rooms around the world, we have been exploring the use and potential of large display installations for almost a decade. Our research has touched on many aspects of this rich design space, from individual tools to complete systems, and has generally moved through the life cycle of a design artifact: from the creation phase, through communication and collaboration, to presentation and dissemination. As we attempt to preserve creative flow through these phases, we introduce social structures and constraints that drive the design of possible point solutions in the larger context of a digital design studio trial environment built in the lab. Although many of the interactions presented are viable across several design phases, this article focuses primarily on facilitating collaboration. We conclude with critical lessons learned, covering both which avenues have been fruitful and which roads to avoid. This article lightly covers the whole design process and attempts to inform readers of key factors to consider when designing for designers.
The Social Life of Information Displays: How Screens Shape Psychological Responses in Social Contexts BIBAFull-Text 48-78
  Erica Robles; Clifford Nass; Adam Kahn
This article presents the results of two experimental laboratory studies that establish relationships between displays and people's attitudes, beliefs, and behaviors toward self, others, and social situations. Experiment I investigates how participants (N = 40) engaging in a trivia game respond when their answers and performance feedback are made public via either a large shared display or each person's laptop display. Using a 2 (answer display: shared vs. personal) × 2 (feedback display: shared vs. personal) between-participants nested design, we find that participants exhibit differential levels of social anxiety, enjoyment, willingness to change answers, and attributions of coparticipant competence. Participants whose answers are shown on the shared display exhibit greater social anxiety but are attributed greater competence by their peers. Viewing information on the shared display induces a greater degree of change in answers. Precisely because all information is public throughout the experiment, we are able to isolate the effects of sharing screens as opposed to sharing information. Experiment II (N = 40) builds on Experiment I by employing similar display configurations within an explicitly persuasive context. In a 2 (display: shared vs. personal) × 2 (context: common vs. personal) × 2 (content presentation style: common vs. interpersonal) mixed experimental design, we produce systematic differences in the persuasiveness of information, people's engagement with content, and their sense of social distance from each other. Across both experiments, strong consistency effects are evident: enjoyment, engagement, and persuasiveness are all diminished when incongruencies are part of the experimental conditions, and these mismatches also increase the sense of social distance from others in the setting. We discuss the implications for future research and the design of display ecologies and situated media.
Equal Opportunities: Do Shareable Interfaces Promote More Group Participation Than Single User Displays? BIBAFull-Text 79-116
  Yvonne Rogers; Youn-kyung Lim; William R. Hazlewood; Paul Marshall
Computers designed for single use are often appropriated suboptimally when used by small colocated groups working together. Our research investigates whether shareable interfaces -- designed for more than one user to interact with -- can facilitate more equitable participation in colocated group settings compared with single user displays. We present a conceptual framework that characterizes Shared Information Spaces (SISs) in terms of how they constrain and invite participation using different entry points. An experiment was conducted that compared three different SISs: a physical-digital set-up (least constrained), a multitouch tabletop (medium), and a laptop display (most constrained). Statistical analyses showed there to be little difference in participation levels between the three conditions other than a predictable lack of equity of control over the interface in the laptop condition. However, detailed qualitative analyses revealed more equitable participation took place in the physical-digital condition in terms of verbal utterances over time. Those who spoke the least contributed most to the physical design task. The findings are discussed in relation to the conceptual framework and, more generally, in terms of how to select, design, and combine different display technologies to support collaborative activities.
Synchronous Gestures in Multi-Display Environments BIBAFull-Text 117-169
  Gonzalo Ramos; Kenneth Hinckley; Andy Wilson; Raman Sarin
Synchronous gestures are patterns of sensed user activity, spanning a distributed system, that take on new meaning when they occur together in time. Synchronous gestures draw inspiration from real-world social rituals such as toasting by tapping two drinking glasses together. In this article, we explore several interactions based on synchronous gestures, including bumping devices together, drawing corresponding pen gestures on touch-sensitive displays, simultaneously pressing a button on multiple smart-phones, and placing one or more devices on the sensing surface of a tabletop computer. These interactions focus on wireless composition of physically colocated devices, where users perceive one another and coordinate their actions through social protocol. We demonstrate how synchronous gestures may be phrased together with surrounding interactions. Such connection-action phrases afford a rich syntax of cross-device commands, operands, and one-to-one or one-to-many associations with a flexible physical arrangement of devices.
   Synchronous gestures enable colocated users to combine multiple devices into a heterogeneous display environment, where the users may establish a transient network connection with other select colocated users to facilitate the pooling of input capabilities, display resources, and the digital contents of each device. For example, participants at a meeting may bring mobile devices including tablet computers, PDAs, and smart-phones, and the meeting room infrastructure may include fixed interactive displays, such as a tabletop computer. Our techniques facilitate creation of an ad hoc display environment for tasks such as viewing a large document across multiple devices, presenting information to another user, or offering files to others. The interactions necessary to establish such ad hoc display environments must be rapid and minimally demanding of attention: during face-to-face communication, a pause of even 5 sec is socially awkward and disrupts collaboration.
   Current devices may associate using a direct transport such as Infrared Data Association ports or the emerging Near Field Communication standard. However, such transports can only support one-to-one associations between devices and require close physical proximity as well as a specific relative orientation to connect the devices (e.g., the devices may be linked when touching head-to-head but not side-to-side). By contrast, sociology research in proxemics (the study of how people use the "personal space" surrounding their bodies) demonstrates that people carefully select physical distance as well as relative body orientation to suit the task, mood, and social relationship with other persons. Wireless networking can free device-to-device connections from the limitations of direct transports but results in a potentially large number of candidate devices. Synchronous gestures address these problems by allowing users to naturally express a spontaneous wireless connection between specific proximal (colocated) interactive displays.
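   To make the timing idea concrete, the following is a minimal Python sketch, not the authors' implementation, of the core mechanism behind synchronous gestures: events sensed on different devices count as one gesture only when they co-occur within a small time window. The names (BumpEvent, pair_synchronous) and the 100-ms window are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class BumpEvent:
        device_id: str    # which device sensed the spike
        timestamp: float  # seconds on an assumed shared clock

    def pair_synchronous(events, window=0.1):
        """Pair events from different devices whose timestamps fall within
        `window` seconds of each other; each pair is a candidate
        device-to-device association (e.g., a bump)."""
        events = sorted(events, key=lambda e: e.timestamp)
        pairs = []
        for i, a in enumerate(events):
            for b in events[i + 1:]:
                if b.timestamp - a.timestamp > window:
                    break  # sorted by time, so no later event can match
                if b.device_id != a.device_id:
                    pairs.append((a, b))
        return pairs

    # Tablets "A" and "B" bumped together; "C" was tapped much later.
    events = [BumpEvent("A", 10.02), BumpEvent("B", 10.05), BumpEvent("C", 14.70)]
    print(pair_synchronous(events))  # only the (A, B) pair -> connect A and B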
There and Back Again: Cross-Display Object Movement in Multi-Display Environments BIBAFull-Text 170-229
  Miguel A. Nacenta; Carl Gutwin; Dzmitry Aliakseyeu; Sriram Subramanian
Multi-display environments (MDEs) are now common and are becoming more complex, with more displays and more types of display in the environment. One crucial requirement specific to MDEs is that users must be able to move objects from one display to another; this cross-display movement is a frequent and fundamental part of interaction in any application that spans two or more display surfaces. Although many cross-display movement techniques exist, the differences between MDEs -- the number, location, and mixed orientation of displays, and the characteristics of the task they are being designed for -- require that interaction techniques be chosen carefully to match the constraints of the particular environment.
   As a way to facilitate interaction design in MDEs, we present a taxonomy that classifies cross-display object movement techniques according to three dimensions: the referential domain that determines how displays are selected, the relationship of the input space to the display configuration, and the control paradigm for executing the movement. These dimensions are based on a descriptive model of the task of cross-display object movement.
   The taxonomy also provides an analysis of current research that designers and researchers can use to understand the differences between categories of interaction techniques.
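   As a rough illustration of how such a taxonomy could be put to work, the sketch below encodes the three dimensions as fields of a record and groups techniques by one of them. The dimension names come from the abstract; the category values and the example techniques' classifications are placeholder assumptions, not the paper's actual coding.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class CrossDisplayTechnique:
        name: str
        referential_domain: str  # how the destination display is selected
        input_space: str         # relation of input space to display configuration
        control_paradigm: str    # how the movement itself is executed

    # Placeholder classifications for two hypothetical techniques:
    techniques = [
        CrossDisplayTechnique("direct-touch move", "spatial", "literal", "closed-loop"),
        CrossDisplayTechnique("flick toward display", "spatial", "planar", "open-loop"),
    ]

    # Grouping by any one dimension lets a designer compare alternatives:
    by_control = {}
    for t in techniques:
        by_control.setdefault(t.control_paradigm, []).append(t.name)
    print(by_control)  # {'closed-loop': [...], 'open-loop': [...]}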
Shaping the Display of the Future: The Effects of Display Size and Curvature on User Performance and Insights BIBAFull-Text 230-272
  Lauren Shupp; Christopher Andrews; Margaret Dickey-Kurdziolek; Beth Yost; Chris North
As display technology continues to improve, there will be an increasing diversity in the available display form factors and scales. Empirical evaluation of how display attributes affect user perceptions and performance can help designers understand the strengths and weaknesses of different display forms, provide guidance for effectively designing multiple display environments, and offer initial evidence for developing theories of ubiquitous display. Although previous research has shown user performance benefits when tiling multiple monitors to increase the number of pixels, little research has analyzed the performance and behavioral impacts of the form factors of much larger, high-resolution displays. This article presents two experiments in which user performance was evaluated on a high-resolution (96 DPI), high pixel-count (approximately 32 million pixels) display for single-user scenarios in both flat and curved forms. We show that for geospatial visual analytics tasks there is a benefit to larger displays, and a distinct advantage to curving the display to make all portions of the display more accessible to the user. In addition, we found that changing the form factor of the display does have an impact on user perceptions that will have to be considered as new display environments are developed.

HCI 2009 Volume 24 Issue 3

A Predictive Model of Human Performance With Scrolling and Hierarchical Lists BIBAFull-Text 273-314
  Andy Cockburn; Carl Gutwin
Many interactive tasks in graphical user interfaces involve finding an item in a list when the item is not currently in sight. The two main ways of bringing the item into view are scrolling of one-dimensional lists and expansion of a level in a hierarchical list. Examples include selecting items in hierarchical menus and navigating through "tree" browsers to find files, folders, commands, or e-mail messages. System designers are often responsible for the structure and layout of these components, yet prior research provides conflicting results on how different structures and layouts affect user performance. For example, empirical research disagrees on whether the time to acquire targets in a scrolling list increases linearly or logarithmically with the length of the list; similarly, experiments have produced conflicting results for the comparative efficacy of "broad and shallow" versus "narrow and deep" hierarchical structures. In this article we continue the human-computer interaction tradition of bringing theory to the debate, demonstrating that prior results regarding scrolling and hierarchical navigation are theoretically predictable and that the divergent results can be explained by the impact of the dataset's organization and the user's familiarity with the dataset. We argue and demonstrate that when users can anticipate the location of items in the list, the time to acquire them is best modeled by functions that are logarithmic with list length, and that linear models arise when anticipation cannot be used. We then propose a formal model of item selection from hierarchical lists, which we validate by comparing its predictions with empirical data from prior studies and from our own. The model also accounts for the transition from novice to expert behavior with different datasets.
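   The two competing model forms the abstract refers to can be stated directly; the sketch below contrasts them, with selection time growing logarithmically with list length n when users can anticipate an item's location and linearly when they must search. The coefficients are arbitrary placeholders, not fitted values from the paper.

    import math

    def anticipated_time(n, a=0.5, b=0.4):
        # Logarithmic model, T = a + b * log2(n): users jump roughly to the
        # right region of the list and home in on the target.
        return a + b * math.log2(n)

    def search_time(n, a=0.5, c=0.05):
        # Linear model, T = a + c * n: serial visual search when the
        # item's location cannot be anticipated.
        return a + c * n

    for n in (8, 64, 512):
        print(f"n={n:4d}  anticipated={anticipated_time(n):.2f}s  "
              f"search={search_time(n):.2f}s")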
Toolkit Support for Integrating Physical and Digital Interactions BIBAFull-Text 315-366
  Scott R. Klemmer; James A. Landay
There is great potential in enabling users to interact with digital information by integrating it with everyday physical objects. However, developing these interfaces requires programmers to acquire and abstract physical input. This is difficult, is time-consuming, and requires a high level of technical expertise in fields very different from user interface development -- especially in the case of computer vision. Based on structured interviews with researchers, a literature review, and our own experience building physical interfaces, we created Papier-Mâché, a toolkit for integrating physical and digital interactions. Its library supports computer vision, electronic tags, and barcodes. Papier-Mâché introduces high-level abstractions for working with these input technologies that facilitate technology portability. We evaluated this toolkit through a laboratory study and longitudinal use in course and research projects, finding the input abstractions, technology portability, and monitoring facilities to be highly effective.
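   The kind of input abstraction described here can be suggested with a small sketch. The Python below is illustrative only (the toolkit itself is not written in Python, and these class and method names are not Papier-Mâché's actual API): vision, tag, and barcode sources all report uniform "physical object appeared/disappeared" events, which is what makes application code portable across input technologies.

    class PhysicalObjectSource:
        """Base class: each input technology pushes the same events."""
        def __init__(self):
            self.listeners = []

        def add_listener(self, on_added, on_removed):
            self.listeners.append((on_added, on_removed))

        def _object_added(self, obj_id):
            for on_added, _ in self.listeners:
                on_added(obj_id)

        def _object_removed(self, obj_id):
            for _, on_removed in self.listeners:
                on_removed(obj_id)

    class BarcodeSource(PhysicalObjectSource):
        def scan(self, code):  # a real source would read from hardware
            self._object_added(code)

    # Application code sees only uniform events, whatever the technology:
    source = BarcodeSource()
    source.add_listener(lambda o: print("added", o),
                        lambda o: print("removed", o))
    source.scan("978-0137053261")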

HCI 2009 Volume 24 Issue 4

Interaction Unit Analysis: A New Interaction Design Framework BIBAFull-Text 367-407
  Hokyoung Ryu; Andrew Monk
A pragmatic approach to interaction modeling is presented by which a designer can describe how the user gets tasks done with a newly developing system. The notation proposed allows an interaction designer to make explicit both how user actions cause visible or noticeable changes in the state of the machine and how the user is expected to use this feedback to generate the next action. Interaction Unit (IU) scenarios are constructed where each IU specifies one step in the cycle of interaction. Each IU specifies the visible system state that leads the user to take some action. In addition, the IU makes explicit the state of the goal stack at the start and end of the unit and the mental processes (recall, recognition, or affordance) required. In this way one can describe the intimate connection between goal, action, and the environment in user–machine interaction.
   To demonstrate the completeness of IU scenario analysis, IU models are presented for some well-known problems in interaction design: hidden and partially hidden modes leading to unexpected system effects, insufficient cues for subgoal construction, insufficient cues for subgoal elimination, and inappropriate affordances for action. These scenarios are accompanied by procedures that designers can use to detect similar problems in putative interaction designs.
   To demonstrate the feasibility of using IU scenario analysis in design, 4 graduate students were taught to use IU scenario analysis in a 3-hr session. They then worked as a group to evaluate a prototype handheld warehouse application. A comparable group was taught and then applied Cognitive Walkthrough. Both groups successfully completed the task and detected several problems rated as being of high severity by the designers of the prototype. Analysis of the problems detected by each group suggests that the two techniques are complementary. IU scenario analysis may be most cost-effective for devices using new interaction paradigms, whereas Cognitive Walkthrough may be most cost-effective for designs using established interaction paradigms.
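   One step of an IU scenario, as described above, can be captured as a simple record: the visible state that prompts an action, the action itself, the goal stack before and after, and the mental process involved. The field names and example below are illustrative, not the paper's notation.

    from dataclasses import dataclass

    @dataclass
    class InteractionUnit:
        visible_state: str   # what the user can see or notice before acting
        action: str          # the user action this state leads to
        goals_before: list   # goal stack at the start of the unit
        goals_after: list    # goal stack at the end of the unit
        mental_process: str  # "recall", "recognition", or "affordance"

    iu = InteractionUnit(
        visible_state="item list shown with 'Scan' button highlighted",
        action="press 'Scan'",
        goals_before=["check stock", "scan item"],
        goals_after=["check stock"],  # subgoal eliminated by visible feedback
        mental_process="affordance",
    )
    print(iu)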
Designing Internet-Based Payment Systems: Guidelines and Empirical Basis BIBAFull-Text 408-443
  Dennis Abrazhevich; Panos Markopoulos; Matthias Rauterberg
This article describes research into online electronic payment systems, focusing on the aspects of payment systems that are critical for their acceptance by end users. Based on our earlier research and a diary study of payments with an online payment system and with the online banking systems of a reputable bank, we proposed a set of 12 interaction design guidelines. The guidelines were applied during the implementation and redesign of a new payment system. An extensive experimental comparison of the original version of the system with the version designed according to the guidelines confirmed the relevance and adequacy of these guidelines for designing online payment systems.