
Human-Computer Interaction 25

Editors: Thomas P. Moran
Dates: 2010
Volume: 25
Publisher: Taylor and Francis Group
Standard No: ISSN 0737-0024
Papers: 11
Links: Table of Contents
  1. HCI 2010 Volume 25 Issue 1
  2. HCI 2010 Volume 25 Issue 2
  3. HCI 2010 Volume 25 Issue 3
  4. HCI 2010 Volume 25 Issue 4

HCI 2010 Volume 25 Issue 1

Privacy, Trust, and Self-Disclosure Online, pp. 1-24
  Adam N. Joinson; Ulf-Dietrich Reips; Tom Buchanan; Carina B. Paine Schofield
Despite increased concern about the privacy threat posed by new technology and the Internet, there is relatively little evidence that people's privacy concerns translate into privacy-enhancing behaviors online. In Study 1, measures of privacy concern were collected, followed 6 weeks later by a request for intrusive personal information alongside measures of trust in the requestor and perceived privacy related to the specific request (n = 759). Participants' dispositional privacy concerns, as well as their level of trust in the requestor and perceived privacy during the interaction, predicted whether they acceded to the request for personal information, although the impact of perceived privacy was mediated by trust. In Study 2, privacy and trust were experimentally manipulated and disclosure measured (n = 180). The results indicated that privacy and trust interact at a situational level such that high trust compensates for low privacy, and vice versa. Implications for understanding the links between privacy attitudes, trust, design, and actual behavior are discussed.
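   The compensatory pattern reported for Study 2 can be made concrete with a small sketch: a regression with a privacy x trust interaction term, run here on simulated data. The variable coding, effect sizes, and statsmodels-based analysis are invented for illustration and are not the authors' materials; only the sample size matches the abstract.

# Hedged sketch: testing a situational privacy x trust interaction on
# disclosure. All data below are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 180  # sample size reported for Study 2
privacy = rng.integers(0, 2, n)  # 0 = low, 1 = high (manipulated)
trust = rng.integers(0, 2, n)    # 0 = low, 1 = high (manipulated)
# Simulate compensation: either factor alone raises disclosure almost as
# much as both together, so the interaction coefficient is negative.
disclosure = (0.5 * privacy + 0.5 * trust
              - 0.4 * privacy * trust + rng.normal(0, 0.3, n))

df = pd.DataFrame({"privacy": privacy, "trust": trust,
                   "disclosure": disclosure})
fit = smf.ols("disclosure ~ privacy * trust", data=df).fit()
# A reliable privacy:trust coefficient is the interaction of interest.
print(fit.params)
print(fit.pvalues["privacy:trust"])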
The Importance of Active Exploration, Optical Flow, and Task Alignment for Spatial Learning in Desktop 3D Environments, pp. 25-66
  Barney Dalgarno; Sue Bennett; Barry Harper
Arguments for the use of interactive 3D simulations in education and training depend to a large extent on an implicit assumption that a more accurate and complete spatial cognitive model can be formed through active user-controlled exploration of such an environment than from viewing an equivalent animation. There is a similar implicit assumption that the viewing of animated view changes provides advantages over the viewing of static images due to the value of optical flow. The results to date, however, do not clearly support these assumptions. In particular, the findings of Péruch, Vercher, and Gauthier (1995) and Christou and Bülthoff (1999) conflict in relation to the importance of active exploration and of optical flow. This article reports the results of two studies exploring the importance of active exploration and of optical flow for spatial learning in 3D environments. The results indicate that active exploration can provide greater spatial learning than viewing of animations, but only if there is an alignment between the task goals during this exploration and the spatial learning being tested. In addition, the results suggest that a set of well-chosen static views of the environment can in some cases allow the formation of as complete a spatial cognitive model as a set of animated views. The article concludes with an analysis of the methodologies used by Péruch et al. and by Christou and Bülthoff in light of the findings reported here, leading to a new explanation for their conflicting results.
Human-Computer Interface Issues in Controlling Virtual Reality With Brain-Computer Interface, pp. 67-94
  Doron Friedman; Robert Leeb; Gert Pfurtscheller; Mel Slater
We have integrated the Graz brain-computer interface (BCI) system with a highly immersive virtual reality (VR) Cave-like system. This setting allows for a new type of experience, whereby participants can control a virtual world using imagery of movement. However, current BCI systems still have many limitations. In this article we present two experiments exploring the different constraints posed by current BCI systems when used in VR. In the first experiment we let the participants make free choices during the experience and compare their BCI performance with that of participants using BCI without free choice; this is unlike most previous work in this area, in which participants are requested to obey cues. In the second experiment we allowed participants to control a virtual body with motor imagery. We provide quantitative and subjective results regarding both BCI accuracy and the nature of the subjective experience in this new type of setting.
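   For readers unfamiliar with how such systems turn imagery of movement into control signals, the sketch below shows the general shape of a motor-imagery pipeline, band-power features classified with linear discriminant analysis, run on simulated EEG. The sampling rate, frequency bands, channel setup, and lateralization model are assumptions for the example, not details taken from the Graz system as used in this study.

# Hedged sketch of a band-power motor-imagery classifier on simulated EEG.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
fs = 250  # sampling rate in Hz (assumed)

def band_power(trial, lo, hi):
    """Mean spectral power of each channel in the [lo, hi] Hz band."""
    freqs, psd = welch(trial, fs=fs, nperseg=fs, axis=-1)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[:, mask].mean(axis=-1)

# Simulate 100 trials x 2 channels (e.g., over C3 and C4) x 2 s of EEG.
X_raw = rng.normal(0, 1, (100, 2, 2 * fs))
y = rng.integers(0, 2, 100)  # 0 = left-hand imagery, 1 = right-hand
# Toy lateralization: damp one channel's amplitude per imagined side,
# mimicking contralateral mu-rhythm desynchronization.
for i, label in enumerate(y):
    X_raw[i, label] *= 0.5

# Features: mu (8-12 Hz) and beta (16-24 Hz) band power per channel.
X = np.array([np.concatenate([band_power(t, 8, 12), band_power(t, 16, 24)])
              for t in X_raw])
clf = LinearDiscriminantAnalysis().fit(X[:80], y[:80])
print("held-out accuracy:", clf.score(X[80:], y[80:]))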

HCI 2010 Volume 25 Issue 2

Concept-Driven Interaction Design Research, pp. 95-118
  Erik Stolterman; Mikael Wiberg
In this article, we explore a concept-driven approach to interaction design research with a specific focus on theoretical advancements. We introduce this approach as a complement to more traditional, well-known, user-centered interaction design approaches. A concept-driven approach aims at manifesting theoretical concepts in concrete designs. A good concept design is both conceptually and historically grounded, bearing signs of the intended theoretical considerations. In the area of human-computer interaction and interaction design research, this approach has been quite popular but has not been explicitly recognized and developed as a research methodology in its own right. In this article, we demonstrate how a concept-driven approach can coexist, and be integrated with, common user-centered approaches to interaction design through the development of a model that makes explicit the existing cycle of prototyping, theory development, and user studies. We also present a set of basic principles that could constitute a foundation for concept-driven interaction research, and we describe the methodological implications of these principles. For the field of interaction design research, we see this as an important point of departure for taking the next step toward the construction and verification of theoretical constructs that can help inform and guide future design research projects on novel interaction technologies.
Inferring Cross-Sections: When Internal Visualizations Are More Important Than Properties of External Visualizations, pp. 119-147
  Peter Khooshabeh; Mary Hegarty
Three experiments examined how cognitive abilities and qualities of external visualizations affected performance of a mental visualization task: inferring the cross-section of a complex three-dimensional object. Experiment 1 investigated the effect of animations designed to provide different task-relevant views of the external object. Experiment 2 examined the effects of both stereoscopic and motion-based depth cues. Experiment 3 examined the effects of interactive animations, with and without stereoscopic viewing conditions. In all experiments, spatial and general reasoning abilities were measured. Effects of animation, stereopsis, and interactivity were relatively small and did not reach statistical significance. In contrast, spatial ability was significantly associated with superior performance in all experiments, and this remained true after controlling for general intelligence. The results indicate that difficulties in this task stem more from the cognitive ability to perform the relevant internal spatial transformations than from limited visual information about the three-dimensional structure of the object.
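   The key analytic move, showing that spatial ability predicts performance even after controlling for general intelligence, can be illustrated with a partial correlation on simulated data. The measures, variable names, and effect sizes below are invented for the sketch and may differ from the authors' actual analysis.

# Hedged sketch: partial correlation of spatial ability with task score,
# controlling for general reasoning ability. Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 60
g = rng.normal(0, 1, n)                      # general reasoning ability
spatial = 0.5 * g + rng.normal(0, 1, n)      # spatial ability, correlated with g
score = 0.6 * spatial + 0.2 * g + rng.normal(0, 1, n)  # cross-section task score

def residualize(y, x):
    """Residuals of y after removing the linear effect of x."""
    slope, intercept, *_ = stats.linregress(x, y)
    return y - (intercept + slope * x)

# Correlation that survives after g is regressed out of both variables.
r, p = stats.pearsonr(residualize(spatial, g), residualize(score, g))
print(f"partial r = {r:.2f}, p = {p:.3g}")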
Environment Analysis as a Basis for Designing Multimodal and Multidevice User Interfaces, pp. 148-193
  Sami Ronkainen; Emilia Koskinen; Ying Liu; Panu Korhonen
In this article, we present a practical approach to analyzing mobile usage environments. We propose a framework for analyzing the restrictions that the characteristics of different environments place on the user's capabilities. These restrictions, together with current user interfaces, form the cost of interaction in a given environment. Our framework aims to illustrate that cost and what causes it: it maps features of the environment to the effects they have on the resources of the user and, in some cases, on the mobile device. This information can be used to guide the design of adaptive and/or multimodal user interfaces, or of devices optimized for certain usage environments. We present an example of using the framework, along with some major findings and three examples of applying them in user interface design.
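   One way to picture a mapping from environmental features to restricted user resources, and from there to a per-modality interaction cost, is the toy structure below. The feature names, resources, and severity weights are invented for illustration; the article's framework is considerably richer than this sketch.

# Hedged sketch: environment features -> restricted user resources -> cost
# per interaction modality. All names and weights are invented.
from collections import defaultdict

# Feature -> {user resource: severity of restriction, 0..1}
RESTRICTIONS = {
    "bright_sunlight": {"vision": 0.8},
    "ambient_noise":   {"hearing": 0.7, "speech": 0.4},
    "walking":         {"hands": 0.5, "vision": 0.3},
    "gloves":          {"hands": 0.9},
}

# Modality -> user resources it depends on
MODALITY_NEEDS = {
    "touchscreen": ["vision", "hands"],
    "voice":       ["speech", "hearing"],
    "audio_cues":  ["hearing"],
}

def interaction_cost(environment):
    """Aggregate restriction per modality for a set of active features."""
    load = defaultdict(float)
    for feature in environment:
        for resource, severity in RESTRICTIONS.get(feature, {}).items():
            load[resource] = max(load[resource], severity)
    return {m: sum(load[r] for r in needs)
            for m, needs in MODALITY_NEEDS.items()}

# Example: walking outdoors in sunlight makes voice the cheapest modality.
print(interaction_cost({"bright_sunlight", "walking"}))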

HCI 2010 Volume 25 Issue 3

NDM-Based Cognitive Agents for Supporting Decision-Making Teams, pp. 195-234
  Xiaocong Fan; Michael McNeese; John Yen
Naturalistic decision making (NDM) focuses on how people actually make decisions in realistic settings that typically involve ill-structured problems. Taking an experimental approach, we investigate the impact of using an NDM-based software agent (R-CAST) on the performance of human decision-making teams in a simulated C3I (Communications, Command, Control, and Intelligence) environment. We examined four types of decision-making teams with mixed human and agent members playing the roles of intelligence collection and command selection. The experiment also involved two within-group control variables: task complexity and context-switching frequency. The results indicate that the use of an R-CAST agent in intelligence collection allows its team member to consider the latest situational information in decision making but might increase that member's cognitive load. They also indicate that a human member playing the role of command selection should not rely too heavily on the agent serving as his or her decision aid. Together, these results suggest that the roles of both humans and cognitive agents are critical for achieving the best possible performance in C3I decision-making teams: Whereas agents are superior in computation-intensive activities such as information seeking and filtering, humans are superior at projecting and reasoning about dynamic situations and are more adaptable to teammates' cognitive capacities. This study demonstrates that cognitive agents empowered with NDM models can serve as teammates and decision aids for human decision makers. Advanced decision support systems built on such team-aware agents could help achieve reduced cognitive load and effective human-agent collaboration.
The Inference of Perceived Usability From Beauty, pp. 235-260
  Marc Hassenzahl; Andrew Monk
A review of 15 papers reporting 25 independent correlations of perceived beauty with perceived usability showed remarkably high variability in the reported coefficients. This may be due to methodological inconsistencies: for example, products are often not selected systematically, and statistical tests are rarely performed to test the generality of findings across products. In addition, studies often restrict themselves to simply reporting correlations without further specifying the underlying judgmental processes.
   The present study's main objective is to re-examine the relation between beauty and usability, that is, the implication that "what is beautiful is usable." To rectify previous methodological shortcomings, both products and participants were sampled in the same way, and the data were aggregated both by averaging over participants to assess the covariance across ratings of products and by averaging over products to assess the covariance across participants. In addition, we adopted an inference perspective on the underlying judgmental processes to examine the possibility that, under the circumstances pertaining in most studies of this kind, in which participants have limited experience of using a website or product, the relationship between beauty and usability is mediated by goodness.
   A mediator analysis of the relationship between beauty, the overall evaluation (i.e., "goodness"), and pragmatic quality (as an operationalization of usability) suggests that the relationship between beauty and usability has been overplayed, as the correlation between pragmatic quality and beauty is wholly mediated by goodness. This pattern of relationships was consistent across four different data sets and different ways of data aggregation. Finally, suggestions are made regarding methodologies that could be used in future studies that build on these results.
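   The mediation pattern described here, a total effect of beauty on pragmatic quality that shrinks toward zero once goodness enters the model, looks roughly like the following sketch on simulated ratings. The authors' data and exact procedure are not reproduced; the effect sizes are invented.

# Hedged sketch of a mediator analysis: beauty -> goodness -> pragmatic
# quality, on simulated ratings.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 200
beauty = rng.normal(0, 1, n)
goodness = 0.7 * beauty + rng.normal(0, 0.5, n)     # beauty -> goodness
pragmatic = 0.8 * goodness + rng.normal(0, 0.5, n)  # goodness -> pragmatic

df = pd.DataFrame({"beauty": beauty, "goodness": goodness,
                   "pragmatic": pragmatic})
total = smf.ols("pragmatic ~ beauty", data=df).fit()
direct = smf.ols("pragmatic ~ beauty + goodness", data=df).fit()
# Full mediation: the beauty coefficient collapses once goodness is added.
print("total effect of beauty:  %.2f" % total.params["beauty"])
print("direct effect of beauty: %.2f" % direct.params["beauty"])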
Providing Dynamic Visual Information for Collaborative Tasks: Experiments With Automatic Camera Control, pp. 261-287
  Jeremy Birnholtz; Abhishek Ranjan; Ravin Balakrishnan
One possibility presented by novel communication technologies is the ability for remotely located experts to provide guidance to others who are performing difficult technical tasks in the real world, such as medical procedures or engine repair. In these scenarios, video views and other visual information seem likely to be useful in the ongoing negotiation of shared understanding, or common ground, but actual results with experimental systems have been mixed. One difficulty in designing these systems is achieving a balance between close-up shots that allow for discussion of detail and wide shots that allow for orientation or establishing a mutual point of focus in a larger space. Achieving this balance without disorienting or overloading task participants can be difficult. In this article we present results from two experiments involving three automated camera control systems for remote repair tasks. Results show that a system providing both detailed and overview information was superior in performance to systems providing only one or the other, but some participants nevertheless preferred the detail-only system.
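   As a purely hypothetical illustration of this design space, one simple policy an automatic controller might use to balance detail and overview shots is a dwell heuristic: cut to a detail shot when the worker's hands stay put, and back to an overview when they move across the workspace. The thresholds and logic below are invented and are not the authors' systems.

# Hedged sketch of a dwell-based shot-selection heuristic.
import math

DWELL_FRAMES = 30    # ~1 s at 30 fps before zooming in (assumed)
DWELL_RADIUS = 50.0  # pixels; threshold for "staying in one place" (assumed)

class CameraController:
    def __init__(self):
        self.anchor = None  # position where the current dwell started
        self.count = 0
        self.shot = "overview"

    def update(self, hand_xy):
        """Feed one tracked hand position per frame; returns current shot."""
        if self.anchor is None or math.dist(hand_xy, self.anchor) > DWELL_RADIUS:
            self.anchor, self.count = hand_xy, 0
            self.shot = "overview"   # movement: re-establish context
        else:
            self.count += 1
            if self.count >= DWELL_FRAMES:
                self.shot = "detail"  # sustained local work: zoom in
        return self.shot

# Example: a hand holding still eventually triggers the detail shot.
cam = CameraController()
print({cam.update((100.0, 200.0)) for _ in range(60)})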

HCI 2010 Volume 25 Issue 4

Toward Spoken Human-Computer Tutorial Dialogues, pp. 289-323
  Sidney K. D'Mello; Art Graesser; Brandon King
Oral discourse is the primary form of human-human communication; hence, computer interfaces that communicate via unstructured spoken dialogues will presumably provide a more efficient, meaningful, and naturalistic interaction experience. Within the context of learning environments, there are theoretical positions supporting a speech facilitation hypothesis, which predicts that spoken tutorial dialogues will increase learning more than typed dialogues. We evaluated this hypothesis in an experiment in which 24 participants learned computer literacy via a spoken and a typed conversation with AutoTutor, an intelligent tutoring system with conversational dialogues. The results indicated that (a) enhanced content coverage was achieved in the spoken condition; (b) learning gains for both modalities were on par with each other and greater than those of a no-instruction control; (c) although speech recognition errors were unrelated to learning gains, they were linked to participants' evaluations of the tutor; (d) participants adjusted their conversational styles when speaking compared to typing; (e) semantic and statistical natural language understanding approaches to comprehending learners' responses were more resilient to speech recognition errors than syntactic and symbolic-based approaches; and (f) simulated speech recognition errors had differential impacts on the fidelity of different semantic algorithms. We discuss the impact of our findings on the speech facilitation hypothesis and on human-computer interfaces that support spoken dialogues.
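   Finding (e), that vector-based semantic matching degrades gracefully under recognition errors while symbolic matching fails outright, can be seen in a toy example: cosine similarity over word-count vectors drops only in proportion to the corrupted words. AutoTutor's actual language understanding (e.g., latent semantic analysis) is far richer than this sketch, and the sentences below are invented.

# Toy illustration: exact matching is brittle to one ASR error; cosine
# similarity over bag-of-words vectors is not.
from collections import Counter
import math

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

ideal = Counter("the operating system manages memory and processes".split())
typed = Counter("the operating system manages memory and processes".split())
# Simulated ASR output with one misrecognized word ("memory" -> "mammary").
spoken = Counter("the operating system manages mammary and processes".split())

print("exact match (typed): ", typed == ideal)              # True
print("exact match (spoken):", spoken == ideal)             # False: brittle
print("cosine (spoken):", round(cosine(spoken, ideal), 2))  # still high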
Direct Pen Interaction With a Conventional Graphical User Interface, pp. 324-388
  Daniel Vogel; Ravin Balakrishnan
We examine the usability and performance of Tablet PC direct pen input with a conventional graphical user interface (GUI). We use a qualitative observational study design with 16 participants divided into 4 groups: 1 mouse group for a baseline control and 3 Tablet PC groups recruited according to their level of experience. The study uses a scripted scenario of realistic tasks and popular office applications designed to exercise standard GUI components and cover typical interactions such as parameter selection, object manipulation, text selection, and ink annotation. We capture a rich set of logging data, including 3D motion capture, video taken from the participants' point of view, screen-capture video, and pen events such as movement and taps. To synchronize, segment, and annotate these logs, we use our own custom analysis software.
   We find that pen participants make more errors, perform inefficient movements, and express frustration during many tasks. Our observations reveal overarching problems with direct pen input: poor precision when tapping and dragging, errors caused by hand occlusion, instability and fatigue due to ergonomics and reach, cognitive differences between pen and mouse usage, and frustration due to limited input capabilities. We believe these to be the primary causes of nontext errors, which contribute to user frustration when using a pen with a conventional GUI. Finally, we discuss how researchers could address these issues without sacrificing the consistency of current GUIs and applications by making improvements at three levels: hardware, base interaction, and widget behavior.
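   Synchronizing heterogeneous captures of this kind typically reduces to offset-correcting each stream to a shared clock and merging events by timestamp. The sketch below illustrates that step with invented stream names, events, and offsets; the authors' custom tool is not described at this level of detail in the abstract.

# Hedged sketch: merge time-stamped events from heterogeneous captures
# (pen events, motion capture, video) onto one offset-corrected timeline.

# (timestamp in seconds on each device's own clock, event label)
pen_events = [(2.10, "pen tap"), (5.32, "pen drag start")]
mocap      = [(1.95, "hand enters tablet volume")]
video      = [(2.00, "frame: reach for target")]

# Per-stream clock offsets to a common timeline (assumed, e.g., from a
# sync marker recorded on every device at the start of a session).
OFFSETS = {"pen": 0.00, "mocap": 0.12, "video": -0.05}

def unified_timeline(streams):
    """Merge offset-corrected event streams in timestamp order."""
    corrected = ((t + OFFSETS[name], name, label)
                 for name, events in streams.items()
                 for t, label in events)
    return sorted(corrected)

for t, src, label in unified_timeline(
        {"pen": pen_events, "mocap": mocap, "video": video}):
    print(f"{t:7.2f}s  [{src:5}] {label}")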