
Human-Computer Interaction 28

Editors: Thomas P. Moran
Dates: 2013
Volume: 28
Publisher: Taylor and Francis Group
Standard No: ISSN 0737-0024
Papers: 16
Links: Table of Contents
  1. HCI 2013-01-01 Volume 28 Issue 1
  2. HCI 2013-03-01 Volume 28 Issue 2
  3. HCI 2013-05-01 Volume 28 Issue 3
  4. HCI 2013-07-01 Volume 28 Issue 4
  5. HCI 2013-09-01 Volume 28 Issue 5
  6. HCI 2013-11-02 Volume 28 Issue 6

HCI 2013-01-01 Volume 28 Issue 1

Using Visual Information for Grounding and Awareness in Collaborative Tasks BIBAFull-Text 1-39
  Darren Gergle; Robert E. Kraut; Susan R. Fussell
When pairs work together on a physical task, seeing a common workspace facilitates communication and benefits performance. When mediating such activities, however, the choice of technology can transform the visual information in ways that impact critical coordination processes. In this article we examine two coordination processes that are impacted by visual information -- situation awareness and conversational grounding -- which are theoretically distinct but often confounded in empirical research. We present three empirical studies that demonstrate how shared visual information supports collaboration through these two distinct routes. We also address how particular features of visual information interact with features of the task to influence situation awareness and conversational grounding, and further demonstrate how these features affect conversation and coordination. Experiment 1 manipulates the immediacy of the visual information and shows that immediate visual feedback facilitates collaboration by improving both situation awareness and conversational grounding. In Experiment 2, by misaligning the perspective through which the Worker and Helper see the work area, we disrupt the ability of visual feedback to support conversational grounding but not situation awareness. The findings demonstrate that visual information supports the central mechanism of conversational grounding. Experiment 3 disrupts the ability of visual feedback to support situation awareness by reducing the size of the common viewing area. The findings suggest that visual information independently supports both situation awareness and conversational grounding. We conclude with a general discussion of the results and their implications for theory development and the future design of collaborative technologies.
Understanding the Role of Body Movement in Player Engagement BIBAFull-Text 40-75
  Nadia Bianchi-Berthouze
The introduction of full-body controllers has made computer games more accessible and promises to provide a more natural and engaging experience to players. However, the relationship between body movement and game engagement is not yet well understood. In this article, I consider how body movement affects the player's experience during game play. I start by presenting a taxonomy of body movements observed during game play. These are framed in the context of a body of previously published research that is then embedded into a novel model of engagement. This model describes the relationship between the taxonomy of movement and the type of engagement that each class of movement facilitates. I discuss the factors that may inhibit or enhance such a relationship. Finally, I conclude by considering how the proposed model could lead to a more systematic and effective use of body movement for enhancing game experience.

HCI 2013-03-01 Volume 28 Issue 2

Analyzing the Adequacy of Interaction Paradigms in Artificial Reality Experiences BIBAFull-Text 77-114
  Narcís Parés; David Altimira
In Artificial Reality experiences, that is, interactive, unencumbered, full-body, 2D vision-based virtual reality (VR) experiences (heirs of the seminal Videoplace by Myron Krueger), there are two possible interaction paradigms, namely, first-person and third-person -- which differ significantly from the classic VR first- and third-person notions. Up until now, these two paradigms had not been compared or objectively analyzed in such systems. Moreover, most systems are based on the third-person paradigm without a specific justification, most probably due to the influence of the original Videoplace system and because it is the only paradigm available in commercial development tools and leisure systems. For example, many rehabilitation projects have chosen to use these VR systems because of their many advantages. However, most of these projects and research have blindly adopted the third-person paradigm. Hence, the field of virtual rehabilitation has analyzed the beneficial properties of these systems without considering the first-person paradigm, which could potentially be a better fit. To find and understand potential differences between the two paradigms, we defined an application categorization from which we developed two full-body interactive games and set up an experiment to analyze each game in both paradigms. We studied how 39 participants played these games, and we quantitatively and qualitatively analyzed how each paradigm influenced the experience, activity, and behavior of the users, as well as their efficiency in accomplishing the required goals. We present the results of these experiments and their general implications, especially for virtual rehabilitation, given the potential impact these systems may have on the well-being of many people.
Enhancing Musical Experience for the Hearing-Impaired Using Visual and Haptic Displays BIBAFull-Text 115-160
  Suranga Chandima Nanayakkara; Lonce Wyse; S. H. Ong; Elizabeth A. Taylor
This article addresses the broad question of understanding whether and how a combination of tactile and visual information could be used to enhance the experience of music by the hearing impaired. Initially, a background survey was conducted with hearing-impaired people to find out the techniques they used to "listen" to music and how their listening experience might be enhanced. Information obtained from this survey and feedback received from two profoundly deaf musicians were used to guide the initial concept of exploring haptic and visual channels to augment a musical experience. The proposed solution consisted of a vibrating "Haptic Chair" and a computer display of informative visual effects. The Haptic Chair provided sensory input of vibrations via touch by amplifying vibrations produced by music. The visual display transcoded sequences of information about a piece of music into various visual sequences in real time. These visual sequences initially consisted of abstract animations corresponding to specific features of music such as beat, note onset, tonal context, and so forth. In addition, because most people with impaired hearing place emphasis on lip reading and body gestures to help understand speech and other social interactions, their experiences were explored when they were exposed to human gestures corresponding to musical input. Rigorous user studies with hearing-impaired participants suggested that musical representation for the hearing impaired should focus on staying as close to the original as possible and is best accompanied by conveying the physics of the representation via an alternate channel of perception. All the hearing-impaired users preferred either the Haptic Chair alone or the Haptic Chair accompanied by a visual display. These results were further strengthened by the fact that user satisfaction was maintained even after continuous use of the system over a period of 3 weeks. 
One of the comments received from a profoundly deaf user when the Haptic Chair was no longer available ("I am going to be deaf again"), poignantly expressed the level of impact it had made. The system described in this article has the potential to be a valuable aid in speech therapy, and a user study is being carried out to explore the effectiveness of the Haptic Chair for this purpose. It is also expected that the concepts presented in this paper would be useful in converting other types of environmental sounds into a visual display and/or a tactile input device that might, for example, enable a deaf person to hear a doorbell ring, footsteps approaching from behind, or a person calling him or her, or to make understanding conversations or watching television less stressful. Moreover, the prototype system could be used as an aid in learning to play a musical instrument or to sing in tune. This research work has shown considerable potential in using existing technology to significantly change the way the deaf community experiences music. We believe the findings presented here will add to the knowledge base of researchers in the field of human-computer interaction interested in developing systems for the hearing impaired.
Tests of Concepts About Different Kinds of Minds: Predictions About the Behavior of Computers, Robots, and People BIBAFull-Text 161-191
  Daniel T. Levin; Stephen S. Killingsworth; Megan M. Saylor; Stephen M. Gordon; Kazuhiko Kawamura
This research investigates adults' understanding of differences in the basic nature of intelligence exhibited by humans and by machines such as computers and robots. We tested these intuitions by asking participants to make predictions about the behaviors of different entities in situations where actions could be based on either goal-directed intentional thought or more mechanical nonintentional thought. Across several studies, adults made more intentional predictions about the behavior of humans than about the behavior of robots or computers. Although initial experiments demonstrated that participants made very similar predictions for computers and anthropomorphic robots, when asked to track robots' attention to objects, participants began to predict more intentional behaviors for the robot. A multiple regression demonstrated that differential behavioral predictions about mechanical and human entities were associated with ratings of goal understanding but not overall intelligence of current computers/robots. These findings suggest that people differentiate humans and computers along the lines of intentionality but initially equate robots and computers. However, the tendency to equate computers and robots can be at least partially overridden when attention is focused on robots engaging in intentional behavior.

HCI 2013-05-01 Volume 28 Issue 3

Making Graph-Based Diagrams Work in Sound: The Role of Annotation BIBAFull-Text 193-221
  Andy Brown; Robert Stevens; Steve Pettifer
Nonlinear forms of diagrammatic presentation, such as node-arc graphs, are a powerful and elegant means of visual information presentation. Although providing nonvisual access is now routine for many forms of linear information, it becomes more difficult as the structure of the information becomes increasingly nonlinear. An understanding of the ways in which graphs benefit sighted people, based on experiments and the literature, together with the difficulties encountered when exploring graphs nonvisually, helps form a solution for nonvisual access to graphs. This article proposes that differing types of annotation offer a powerful and flexible technique for transferring the benefits of graph-based diagrams, as well as for reducing disorientation while moving around the graph and for tackling some of the inherent disadvantages of using sound. Different forms of annotation that may address these problems are explored, classified, and evaluated, including notes designed to summarise and to aid node differentiation. Graph annotation may be performed automatically, creating a graph that evaluation shows requires less mental effort to explore and on which tasks can be achieved more effectively and more efficiently.
Challenges and Opportunities for Mathematics Software in Expert Problem Solving BIBAFull-Text 222-264
  Andrea Bunt; Michael Terry; Edward Lank
Computer Algebra Systems and matrix-based mathematics packages provide sophisticated functionality to assist with mathematical problem solving. However, despite their widespread adoption, little work in the human-computer interaction community has examined the extent to which these computational tools support expert problem solving. In this article, we report findings from a qualitative study comparing and contrasting the work practices and software use of practicing researchers in mathematics and engineering who share the goal of developing and defending new mathematical formulations. Our findings indicate that although computational tools are used by both groups to support their work, current mathematics software plays a relatively minor, somewhat untrusted role in the process. Our data suggest that five primary factors limit the applicability of current mathematics software to expert work practices: (a) a lack of transparency in how current software derives its computed results; (b) the lack of clearly defined operational boundaries indicating whether the system can meaningfully operate on the user's input (whether expressions or data); (c) the need for free-form two-dimensional input to support annotations, diagrams, and in-place manipulation of objects of interest; (d) the potential for transcription problems when switching between physical and computational media; and (e) the need for collaboration, particularly in early stages of problem solving. Each of these issues suggests a concrete direction for future improvement of mathematics software for experts. These findings also have more general implications for the design of computational systems intended to support complex problem solving.
Paratyping: A Contextualized Method of Inquiry for Understanding Perceptions of Mobile and Ubiquitous Computing Technologies BIBAFull-Text 265-286
  Gillian R. Hayes; Khai N. Truong
In this article, we describe the origins, use, and efficacy of a contextualized method for evaluating mobile and ubiquitous computing systems. This technique, which we called paratyping, is based on experience prototyping and event-contingent experience sampling and allows researchers to survey people in real-life situations without the need for costly and sometimes untenable deployment evaluations. We used this tool to probe the perceptions of the conversation partners of users of the Personal Audio Loop, a memory aid with the potential for substantial privacy implications. Based on that experience, we refined and adapted the approach to evaluate SenseCam, a wearable, automatic picture-taking device, across multiple geographic locations. We describe the benefits, challenges, and methodological considerations that emerged during our use of the paratyping method across these two studies. We describe how this method blends some of the benefits of survey-based research with more contextualized methods, focusing on trustworthiness of the method in terms of generating scientific knowledge. In particular, this method is a good fit for studying certain classes of mobile and ubiquitous computing applications but can be applied to many types of applications.

HCI 2013-07-01 Volume 28 Issue 4

The Routineness of Routines: Measuring Rhythms of Media Interaction BIBAFull-Text 287-334
  Norman Makoto Su; Oliver Brdiczka; Bo Begole
The routines of information work are commonplace yet difficult to characterize. Although cognitive models have successfully characterized routine tasks within which there is little variation, a large body of ethnomethodological research has identified the inherent nonroutineness of routines in information work. We argue that work does not fall into discrete classes of routine versus nonroutine; rather, task performance lies on a continuum of routineness, and routineness metrics are important to the understanding of workplace multitasking. In a study of 10 information workers shadowed for 3 whole working days each, we utilize the construct of working sphere to model projects/tasks as a network of humans and artifacts. Employing a statistical technique called T-pattern analysis, we derive measures of routineness from these real-world data. In terms of routineness, we show that information workers experience archetypes of working spheres. The results indicate that T-patterns of interactions with information and computational media are important indicators of facets of routineness and that these measures are correlated with workers' affective states. Our results are some of the first to demonstrate how regular temporal patterns of media interaction in tasks are related to stress. These results suggest that designs of systems to facilitate so-called routine work should consider the degrees to which a person's working spheres fall along varying facets of routineness.
An Observational Study of Dual Display Usage in University Classroom Lectures BIBAFull-Text 335-377
  Joel Lanir; Kellogg S. Booth; Steven A. Wolfman
We report a study of how dual display screens were used in classroom lectures for university-level courses across a variety of disciplines during five academic terms over a 2-year period. Our goal was to understand the pedagogical consequences of using more than a single electronic display screen to support classroom lectures. We deployed an in-house software system (MultiPresenter) in real classrooms. We examined the use of MultiPresenter by 8 university instructors who taught 15 courses with a total of 1,147 students during 13-week regular terms or 6-week summer terms. We observed classroom lectures, interviewed instructors, collected screen images and log files of MultiPresenter usage, and administered questionnaires to students about their subjective impressions. Based on these data, we analyzed how instructors used MultiPresenter in order to identify examples of how multiple display screens might best be used for educational purposes. The analysis revealed that the following practices are beneficial: the ability to keep information persistent for extended periods, the increased flexibility in where and when information is shown, capability for side-by-side comparison of full screens of information, simultaneous visibility of both overview ("roadmap") and detailed ("content") information, and extra space to annotate information. Possible hazards include difficulty focusing on specific information amidst a large amount of information and too much information changing too quickly without proper indication of the changes.

HCI 2013-09-01 Volume 28 Issue 5

Teaching Robots Style: Designing and Evaluating Style-by-Demonstration for Interactive Robotic Locomotion BIBAFull-Text 379-416
  James E. Young; Ehud Sharlin; Takeo Igarashi
In this article we present a multipart formal design and evaluation of the style-by-demonstration (SBD) approach to creating interactive robot behaviors: enabling people to design the style of interactive robot behaviors by providing an exemplar. We first introduce our Puppet Master SBD algorithm that enables the creation of interactive robot behaviors with a focus on style: Users provide an example demonstration of human-robot interaction and Puppet Master uses this to generate real-time interactive robot output that matches the demonstrated style. We further designed and implemented original interfaces for demonstrating interactive robot style and for interacting with the resulting robot behaviors. Following, we detail a set of studies we performed to appraise users' reactions to and acceptance of the SBD interaction design approach, the effectiveness of the underlying Puppet Master algorithm, and the usability of the demonstration interfaces. Fundamentally, this article investigates the broad questions of how people respond to SBD interaction, how they engage SBD interfaces, how SBD can be practically realized, and how the SBD approach to social human-robot interaction can be employed in future interaction design.
Alternatives to Eye Tracking for Predicting Stimulus-Driven Attentional Selection Within Interfaces BIBAFull-Text 417-441
  Christopher M. Masciocchi; Jeremiah D. Still
The visual properties of a design contribute to the formation of regions with differing amounts of uniqueness, or salience, producing an initial stimulus-driven attentional bias. The colocation of salient regions and critical information should be maximized as this increases the interface's usability by decreasing search times. The determination of salient locations, however, is often difficult. In web page design, eye tracking has traditionally been used to measure where users attend, therefore indicating the salient regions. But, eye tracking as a descriptive technique has many known costs. We propose two alternative methods to eye tracking that can be used to predict which regions of a web page will draw users' stimulus-driven attention: interest point recording and saliency model predictions. Through an empirical investigation we show that the predictions of both methods correlate with the locations fixated by a separate group of participants, and thus these methods are effective alternatives to eye tracking during formative design testing.
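The abstract's correlation analysis can be illustrated with a minimal sketch: given per-region tallies from two methods (interest-point selections and eye-tracking fixations), a Pearson correlation quantifies their agreement. The region counts below are invented purely for illustration and are not data from the study.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of region scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-region tallies for one web page layout:
interest_points = [12, 3, 7, 1, 9]  # interest-point selections per region
fixations = [10, 4, 8, 2, 7]        # eye-tracking fixations per region
r = pearson_r(interest_points, fixations)
```

A high `r` on data like this is the kind of evidence the authors use to argue that the cheaper methods track stimulus-driven fixations.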
Remote Technical Support Requires Diagnosing the End User (Customer) as well as the Computer BIBAFull-Text 442-477
  Jennifer M. Allen; Leo Gugerty; Eric R. Muth; Jenna L. Scisco
A cognitive task analysis (CTA) and a laboratory study examined the cognitive and communicative processes technical support workers use when providing remote, Internet-based technical support, with a particular focus on the role of the end user who is receiving technical support. In the CTA, 6 experienced technical support employees communicated with end users to solve 4 technical support problems while providing the task analyst with a verbal protocol of their thoughts and actions. The results of the CTA include procedural diagrams illustrating the technical support process and the concept of "user diagnosis." The subsequent laboratory study examined how two characteristics of technical support end users, their emotional state and the level of detail in their problem descriptions, affected stress levels and performance of technical support personnel. Angry end users caused more subjective stress to technical support workers than happy users, and technical support employees perceived problems to be more difficult when they were interacting with vague users or angry users. When end users were vague or angry, technical support workers' problem-solving time and performance ratings were significantly worse. Taken together, the results from these studies can be applied to improve the technical support process by providing interfaces that scaffold the communicative aspects of the technical support process and adding training and evaluation for support workers in communication and interpersonal skills.

HCI 2013-11-02 Volume 28 Issue 6

Machines Outperform Laypersons in Recognizing Emotions Elicited by Autobiographical Recollection BIBAFull-Text 479-517
  Joris H. Janssen; Paul Tacken; J. J. G. (Gert-Jan) de Vries; Egon L. van den Broek; Joyce H. D. M. Westerink; Pim Haselager; Wijnand A. IJsselsteijn
Over the last decade, an increasing number of studies have focused on automated recognition of human emotions by machines. However, performances of machine emotion recognition studies are difficult to interpret because benchmarks have not been established. To provide such a benchmark, we compared machine with human emotion recognition. We gathered facial expressions, speech, and physiological signals from 17 individuals expressing 5 different emotional states. Support vector machines achieved an 82% recognition accuracy based on physiological and facial features. In experiments with 75 humans on the same data, a maximum recognition accuracy of 62.8% was obtained. As machines outperformed humans, automated emotion recognition might be ready to be tested in more practical applications.
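As an aside, the classifier family named in the abstract can be sketched compactly. The following is a pure-Python linear SVM trained with Pegasos-style stochastic sub-gradient descent on hinge loss; the toy data, labels, and hyperparameters are invented for illustration, and the study's actual feature extraction and (likely kernelized) setup are not reproduced here.

```python
import random

def train_linear_svm(data, labels, lam=0.01, epochs=200, seed=0):
    """Minimal linear SVM via Pegasos-style stochastic sub-gradient descent.
    labels must be +1/-1; returns a weight vector with the bias folded in."""
    rng = random.Random(seed)
    w = [0.0] * (len(data[0]) + 1)  # +1 for the bias term
    t = 0
    for _ in range(epochs):
        order = list(range(len(data)))
        rng.shuffle(order)
        for i in order:
            t += 1
            eta = 1.0 / (lam * t)                 # decaying step size
            x = list(data[i]) + [1.0]             # append bias input
            margin = labels[i] * sum(wj * xj for wj, xj in zip(w, x))
            w = [(1.0 - eta * lam) * wj for wj in w]  # regularization shrink
            if margin < 1.0:                      # hinge-loss violation
                w = [wj + eta * labels[i] * xj for wj, xj in zip(w, x)]
    return w

def predict(w, x):
    score = sum(wj * xj for wj, xj in zip(w, list(x) + [1.0]))
    return 1 if score >= 0 else -1

# Toy two-class "emotion feature" data, linearly separable by construction:
data = [[2.0, 2.0], [3.0, 1.0], [2.0, 3.0], [-2.0, -2.0], [-3.0, -1.0], [-2.0, -3.0]]
labels = [1, 1, 1, -1, -1, -1]
w = train_linear_svm(data, labels)
```

A multiclass problem such as the study's 5 emotional states would typically be handled by combining several such binary classifiers (one-vs-rest or one-vs-one).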
Do Pedagogical Agents Enhance Software Training? BIBAFull-Text 518-547
  Hans van der Meij
This study investigates whether a tutorial for software training can be enhanced by adding a pedagogical agent, and whether the type of agent matters (i.e., cognitive, motivational, or mixed). The cognitive agent was designed to stimulate students to process their experiences actively. The motivational agent was designed to increase perceived task relevance and self-efficacy beliefs. A mixed agent combined these features. Process and product data were recorded during and after software training of students from the upper grades of vocational education (M age = 16.2 years). Comparison of scores on performance measures during training revealed a significant advantage of working with the motivational and mixed agents for two important motivational mediators for learning (i.e., strategy systematicity and mood). All students were highly successful during training, improving from an average 30% task completion score on the pretest to a 77% posttest score. On a retention measure 3 weeks later, task completion was still at 66%. Working with the motivational and control agents yielded significantly higher retention scores, whereas working with the motivational and mixed agents led to significantly higher scores on task relevance and self-efficacy beliefs after training. The discussion reflects on the possibilities for improving the internal and external properties of the agents.
Issues Related to HCI Application of Fitts's Law BIBAFull-Text 548-578
  Charles E. Wright; Francis Lee
Taking Fitts's law as a premise -- that is, movement time is a linear function of an appropriate index of difficulty -- we explore three issues related to the collection and reporting of these data from the perspective of application within human-computer interaction. The central question involved two design choices: whether results obtained using blocked target conditions are representative of performance in situations in which, as is often the case, target conditions vary from movement to movement, and how this difference depends on whether discrete or serial (continuous) movements are studied. Although varied target conditions led to longer movement times, the effect was additive, was surprisingly small, and did not depend on whether the movements were discrete or serial. This suggests that evaluating devices or designs using blocked data may be acceptable. With Zhai (2004), we argue against the practice of reporting throughput as a one-dimensional summary for published comparisons of devices or designs. We also question whether analyses using an accuracy-adjusted index of difficulty are appropriate in all design applications.
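The premise of the abstract above can be stated concretely. A minimal sketch of Fitts's law in the Shannon formulation commonly used in HCI follows; the regression constants `a` and `b` below are illustrative placeholders, not values from the article.

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts's index of difficulty, in bits:
    ID = log2(D/W + 1)."""
    return math.log2(distance / width + 1)

def movement_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time (s) as a linear function of ID: MT = a + b*ID.
    a and b are illustrative device-specific constants fit by regression."""
    return a + b * index_of_difficulty(distance, width)
```

For example, a target 7 units away and 1 unit wide has ID = log2(8) = 3 bits; the reciprocal of `b` (bits per second) is the throughput figure whose use as a one-dimensional summary the article questions.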