
ACM Transactions on Computer-Human Interaction 14

Editors: John M. Carroll
Dates: 2007
Volume: 14
Publisher: ACM
Standard No: ISSN 1073-0516
Papers: 21
Links: Table of Contents
  1. TOCHI 2007 Volume 14 Issue 1
  2. TOCHI 2007 Volume 14 Issue 2
  3. TOCHI 2007 Volume 14 Issue 3
  4. TOCHI 2007 Volume 14 Issue 4

TOCHI 2007 Volume 14 Issue 1

Model-based evaluation of expert cell phone menu interaction
  Robert St. Amant; Thomas E. Horton; Frank E. Ritter
We describe concepts to support the analysis of cell phone menu hierarchies, based on cognitive models of users and easy-to-use optimization techniques. We present an empirical study of user performance on five simple tasks of menu traversal on an example cell phone. Two of the models applied to these tasks, based on GOMS and ACT-R, give good predictions of behavior. We use the empirically supported models to create an effective evaluation and improvement process for menu hierarchies. Our work makes three main contributions: a novel and timely study of a new, very common HCI task; new versions of existing models for accurately predicting performance; and a search procedure to generate menu hierarchies that reduce traversal time, in simulation studies, by about a third.
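   To make the flavor of such model-based evaluation concrete, the following minimal Python sketch estimates frequency-weighted traversal time for a menu tree and reorders siblings by use frequency; the timing constants, tree format, and greedy reordering are illustrative assumptions and are far simpler than the paper's GOMS/ACT-R models and search procedure.

      # Crude keystroke-level estimate of menu traversal time (assumed constants).
      KEYPRESS = 0.28   # seconds per down-key press
      SELECT   = 0.45   # seconds to read and confirm a highlighted item

      def traversal_time(path_positions):
          # Time to reach a leaf given its 1-based position at each menu level.
          return sum((pos - 1) * KEYPRESS + SELECT for pos in path_positions)

      def expected_time(menu, freq, prefix=()):
          # Frequency-weighted expected selection time over all leaves.
          # menu: list of (name, children) pairs; children is a (possibly empty) list.
          total, weight = 0.0, 0.0
          for pos, (name, children) in enumerate(menu, start=1):
              if children:
                  t, w = expected_time(children, freq, prefix + (pos,))
              else:
                  w = freq.get(name, 1)
                  t = w * traversal_time(prefix + (pos,))
              total, weight = total + t, weight + w
          return (total, weight) if prefix else total / weight

      def reorder_by_frequency(menu, freq):
          # Greedy stand-in for a search procedure: frequent subtrees come first.
          def subtree_weight(node):
              name, children = node
              return freq.get(name, 1) if not children else sum(map(subtree_weight, children))
          return [(name, reorder_by_frequency(children, freq) if children else [])
                  for name, children in sorted(menu, key=subtree_weight, reverse=True)]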
PRISM interaction for enhancing control in immersive virtual environments
  Scott Frees; G. Drew Kessler; Edwin Kay
When directly manipulating 3D objects in an immersive environment, we cannot normally achieve the accuracy and control that we have in the real world. This reduced accuracy stems from hand instability. We present PRISM, which dynamically adjusts the control/display (C/D) ratio between the hand and the controlled object to provide increased control when moving slowly and direct, unconstrained interaction when moving rapidly. We describe PRISM object translation and rotation and present user studies demonstrating their effectiveness. In addition, we describe a PRISM-enhanced version of ray casting that is shown to increase the speed and accuracy of object selection.
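   The central control/display adjustment can be sketched as a velocity-dependent gain applied to each frame of hand motion; the constants below, and the omission of PRISM's offset-recovery behavior, are simplifications for illustration rather than the published parameters.

      MIN_V = 0.01   # m/s: below this speed, treat hand motion as jitter (assumed)
      SC    = 0.20   # m/s: at or above this speed, use direct 1:1 manipulation (assumed)

      def object_displacement(hand_delta, hand_speed):
          # Map one frame of hand displacement to object displacement.
          if hand_speed < MIN_V:
              gain = 0.0                 # noise: freeze the object
          elif hand_speed < SC:
              gain = hand_speed / SC     # slow, deliberate motion: scale down for precision
          else:
              gain = 1.0                 # fast motion: direct, unconstrained interaction
          return tuple(gain * d for d in hand_delta)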
A field evaluation of an adaptable two-interface design for feature-rich software
  Joanna McGrenere; Ronald M. Baecker; Kellogg S. Booth
Two approaches for supporting personalization in complex software are system-controlled adaptive menus and user-controlled adaptable menus. We evaluate a novel interface design for feature-rich productivity software based on adaptable menus. The design allows the user to easily customize a personalized interface, and also supports quick access to the default interface with all of the standard features. This design was prototyped as a front-end to a commercial word processor. A field experiment investigated users' personalizing behavior and tested the effects of different interface designs on users' satisfaction and their perceived ability to navigate, control, and learn the software. There were two conditions: a commercial word processor with adaptive menus and our prototype with adaptable menus for the same word processor. Our evaluation shows: (1) when provided with a flexible, easy-to-use and easy-to-understand customization mechanism, the majority of users do effectively personalize their interface; and (2) user-controlled interface adaptation with our adaptable menus results in better navigation and learnability, and allows for the adoption of different personalization strategies, as compared to a particular system-controlled adaptive menu system that implements a single strategy. We report qualitative data obtained from interviews and questionnaires with participants in the evaluation in addition to quantitative data.
Design parameters of rating scales for web sites
  Paul Van Schaik; Jonathan Ling
The effects of design parameters of rating scales on the perceived quality of interaction with web sites were investigated, using four scales (Disorientation, Perceived ease of use, Perceived usefulness and Flow). Overall, the scales exhibited good psychometric properties. In Experiment 1, psychometric results generally converged between two response formats (visual analogue scale and Likert scale). However, in Experiment 2, presentation of one questionnaire item per page was better than all items presented on a single page and direct interaction (using radio buttons) was better than indirect interaction (using a drop-down box). Practical implications and a framework for measurement are presented.
Approaching and leave-taking: Negotiating contact in computer-mediated communication
  John C. Tang
A major difference between face-to-face interaction and computer-mediated communication is how contact negotiation -- the way in which people start and end conversations -- is managed. Contact negotiation is especially problematic for distributed group members who are separated by distance and thus do not share many of the cues needed to help mediate interaction. Understanding what resources and cues people use to negotiate contact when face-to-face identifies ways to design support for contact negotiation in new technologies for remote collaboration. This perspective is used to analyze the design and use experiences with three communication prototypes: Desktop Conferencing Prototype, Montage, and Awarenex. These prototypes use text, video, and graphic indicators to share the cues needed to gracefully start and end conversations. Experiences with using these prototypes focused on how these designs support the interactional commitment of the participants -- when they have to commit their attention to an interaction and how flexibly that can be negotiated. Reviewing what we learned from these research experiences identifies directions for future research in supporting contact negotiation in computer-mediated communication.

TOCHI 2007 Volume 14 Issue 2

Untangling the usability of fisheye menus
  Kasper Hornbæk; Morten Hertzum
Fisheye menus have become a prominent example of fisheye interfaces, yet contain several nonfisheye elements and have not been systematically evaluated. This study investigates whether fisheye menus are useful, and tries to untangle the impact on usability of the following properties of fisheye menus: the use of distortion, the index of letters for coarse navigation, and the focus-lock mode for accurate movement. Twelve participants took part in an experiment comparing fisheye menus with three alternative menu designs across known-item and browsing tasks, as well as across alphabetical and categorical menu structures. The results show that for finding known items, conventional hierarchical menus are the most accurate and by far the fastest. In addition, participants rate the hierarchical menu as more satisfying than fisheye and multifocus menus, but do not consistently prefer any one menu. For browsing tasks, the menus differ neither in accuracy nor in selection time. Eye-movement data show that participants make little use of nonfocus regions of the fisheye menu, though these are a defining feature of fisheye interfaces. Nonfocus regions are used more with the multifocus menu, which enlarges important menu items in these regions. With the hierarchical menu, participants make shorter fixations and have shorter scanpaths, suggesting lower requirements for mental activity and visual search. We conclude by discussing why fisheye menus are inferior to the hierarchical menu and how both may be improved.
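   The distortion component under study can be pictured as a degree-of-interest height function in which item size falls off with distance from the focused item; the sizes and falloff below are assumptions, not the evaluated implementation.

      def fisheye_heights(n_items, focus, max_h=20.0, min_h=3.0, falloff=0.35):
          # Return a display height (in pixels, say) for each item index.
          heights = []
          for i in range(n_items):
              dist = abs(i - focus)
              heights.append(min_h + (max_h - min_h) / (1.0 + falloff * dist))
          return heights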
Constructing reality: A study of remote, hands-on, and simulated laboratories
  James E. Corter; Jeffrey V. Nickerson; Sven K. Esche; Constantin Chassapis; Seongah Im; Jing Ma
Laboratories play a crucial role in the education of future scientists and engineers, yet there is disagreement among science and engineering educators about whether and which types of technology-enabled labs should be used. This debate could be advanced by large-scale randomized studies addressing the critical issue of whether remotely operated or simulation-based labs are as effective as the traditional hands-on lab format. The present article describes the results of a large-scale (N = 306) study comparing learning outcomes and student preferences for several different lab formats in an undergraduate engineering course. The lab formats that were evaluated included traditional hands-on labs, remotely operated labs, and simulations. Learning outcomes were assessed by a test of the specific concepts taught in each lab. Depending on topic, knowledge scores after remote and simulated laboratories were as high as or higher than those after hands-on laboratories. In their responses to survey items, many students saw advantages to technology-enabled lab formats in terms of such attributes as convenience and reliability, but still expressed preference for hands-on labs. Also, differences in lab formats led to changes in group functioning across the plan-experiment-analyze process: for example, students did less face-to-face work when engaged in remote or simulated laboratories than in hands-on laboratories.
Modeling the effects of delayed haptic and visual feedback in a collaborative virtual environment
  Caroline Jay; Mashhuda Glencross; Roger Hubbold
Collaborative virtual environments (CVEs) enable two or more people, separated in the real world, to share the same virtual "space." They can be used for many purposes, from teleconferencing to training people to perform assembly tasks. Unfortunately, the effectiveness of CVEs is compromised by one major problem: the delay that exists in the networks linking users together. Whilst we have a good understanding, especially in the visual modality, of how users are affected by delayed feedback from their own actions, little research has systematically examined how users are affected by delayed feedback from other people, particularly in environments that support haptic (force) feedback. The current study addresses this issue by quantifying how increasing levels of latency affect visual and haptic feedback in a collaborative target acquisition task. Our results demonstrate that haptic feedback in particular is very sensitive to low levels of delay. Whilst latency affects visual feedback from 50 ms, it impacts on haptic task performance 25 ms earlier, and causes the haptic measures of performance deterioration to rise far more steeply than visual. The "impact-perceive-adapt" model of user performance, which considers the interaction between performance measures, perception of latency, and the breakdown of perception of immediate causality, is proposed as an explanation for the observed pattern of performance.
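   Added latency of the kind manipulated here is typically injected by buffering the remote partner's updates in a timestamped queue and releasing them to the local visual and haptic loops only after the configured delay; the sketch below shows that general mechanism only, with names and structure assumed rather than taken from the authors' apparatus.

      from collections import deque

      class DelayLine:
          def __init__(self, delay_s):
              self.delay_s = delay_s
              self.queue = deque()              # (arrival_time, update) pairs

          def push(self, now, update):
              self.queue.append((now, update))

          def pop_ready(self, now):
              # Release every update whose configured delay has elapsed by `now`.
              ready = []
              while self.queue and now - self.queue[0][0] >= self.delay_s:
                  ready.append(self.queue.popleft()[1])
              return ready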
Evaluating the boundary conditions of the technology acceptance model: An exploratory investigation
  Hock Chuan Chan; Hock-Hai Teo
The technology acceptance model (TAM) is very widely used for studying technology acceptance. The model states that an individual's behavioral intention (BI) to use an information system is determined by the individual's perceived usefulness (PU) and perceived ease of use (PEOU) of the system. While many studies have applied the TAM, none has examined the model's behavior over its entire value range. We conducted two surveys to examine the values of BI over the two-dimensional boundary space formed by PU and PEOU. Contrary to current understanding, we find that the effects of PU and PEOU vary over the boundary space.
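   The boundary-space question can be pictured by evaluating a fitted linear TAM surface over the full range of both predictors rather than only where most observations fall; the coefficients and 7-point scales below are purely illustrative.

      def predicted_bi(pu, peou, b0=0.5, b_pu=0.55, b_peou=0.25):
          # Hypothetical fitted linear model: BI = b0 + b_pu*PU + b_peou*PEOU.
          return b0 + b_pu * pu + b_peou * peou

      # Evaluate the surface over the whole PU x PEOU boundary space (1-7 scales assumed).
      surface = {(pu, peou): predicted_bi(pu, peou)
                 for pu in range(1, 8) for peou in range(1, 8)}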
SADIe: Structural semantics for accessibility and device independence
  Simon Harper; Sean Bechhofer
Visually impaired users are hindered in their efforts to access the largest repository of electronic information in the world, namely, the World Wide Web (web). A visually impaired user's information and presentation requirements are different from a sighted user's. These requirements can become problems in that the web is visually centric with regard to presentation and information order/layout. Finding semantic information already encoded directly into documents can help to alleviate these problems. Our approach can be loosely described as follows. For a particular cascading stylesheet (CSS), we provide an extension to an upper-level ontology which represents the interface between web documents and the programmatic transformation mechanism. This extension gives the particular characteristics of the elements appearing in that specific CSS. We can consider this extension to be an annotation of the CSS elements implicitly encoded into the web document. This means that one ontology can be used to accurately transform every web document that references the CSS used to generate that ontology. Put simply, one ontology accurately transforms an entire site, using generalized programmatic machinery able to cope with all sites that use CSS. Here we describe our method, implementation, and technical evaluation.
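   A loose sketch of the transformation idea follows (class names, roles, and structure are invented for illustration, not the SADIe ontology): a per-site extension maps CSS classes to shared concepts, and generic machinery uses those concepts to drop clutter and reorder content for linear reading.

      CSS_EXTENSION = {                 # per-site mapping: CSS class -> shared concept
          "sidebar-ad": "Removable",
          "nav-main":   "Menu",
          "story-body": "MainContent",
      }
      PRIORITY = {"MainContent": 0, "Menu": 1}   # reading order for kept concepts

      def transcode(elements):
          # elements: (css_class, text) pairs in source order. Classes mapped to
          # "Removable" (or unknown classes) are dropped; the rest are reordered
          # so that the main content is read first.
          kept = [(PRIORITY[CSS_EXTENSION[c]], text)
                  for c, text in elements if CSS_EXTENSION.get(c) in PRIORITY]
          return [text for _, text in sorted(kept, key=lambda pair: pair[0])]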

TOCHI 2007 Volume 14 Issue 3

Introduction to special issue on computers and accessibility
  Andrew Sears; Vicki L. Hanson; Brad Myers

Special Issue on Computers and Accessibility

Web accessibility for individuals with cognitive deficits: A comparative study between an existing commercial Web and its cognitively accessible equivalent
  Javier Sevilla; Gerardo Herrera; Bibiana Martínez; Francisco Alcantud
Tim Berners-Lee claimed in 2001 that "the power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect". A considerable amount of work has been done to make the web accessible to those with sensory or motor disabilities, with an increasing number of government and enterprise intranet sites becoming "accessible", and with some consortia and groups seriously committing to this goal. Some authors, such as Harrysson, have already highlighted the need for a cognitively accessible web. However, in spite of good intentions, little work to date has tackled this task. At least until now, the existing WAI and NI4 recommendations on cognitive disability are extremely difficult (if not impossible) to test, as they are only general recommendations. This article describes an alternative web that was constructed and tested with a sample of participants with cognitive disabilities (N = 20), with positive results that encourage us to dedicate more effort to fine-tuning requirements for specific cognitive deficits and to automating the process of creating and testing cognitively accessible web content. This alternative web relies on a simplified web browser and an appropriately adapted web design. We also discuss the need for several levels of cognitive accessibility, for equivalent (although not identical) content for this group, and for testable accessibility protocols that support these users' needs. The article finishes with conclusions about the potential impact of accessible pages on the daily life of people with cognitive deficits, outlining the features to be considered in a user profile specification that supports cognitive difficulties, and with reflections on the suitability of Semantic Web technologies for future developments in this field.
Analysis of navigability of Web applications for improving blind usability
  Hironobu Takagi; Shin Saito; Kentarou Fukuda; Chieko Asakawa
Various accessibility activities are improving blind access to the increasingly indispensable WWW. These approaches use various metrics to measure the Web's accessibility. "Ease of navigation" (navigability) is one of the crucial factors for blind usability, especially for complicated webpages used in portals and online shopping sites. However, it is difficult for automatic checking tools to evaluate the navigation capabilities even for a single webpage. Navigability issues for complete Web applications are still far beyond their capabilities.
   This study aims at obtaining quantitative results about the current accessibility status of real world Web applications, and analyzes real users' behavior on such websites. In Study 1, an automatic analysis method for webpage navigability is introduced, and then a broad survey using this method for 30 international online shopping sites is described. The next study (Study 2) focuses on a fine-grained analysis of real users' behavior on some of these online shopping sites. We modified a voice browser to record each user's actions and the information presented to that user. We conducted user testing on existing sites with this tool. We also developed an analysis and visualization method for the recorded information. The results showed us that users strongly depend on scanning navigation instead of logical navigation. A landmark-oriented navigation model was proposed based on the results. Finally, we discuss future possibilities for improving navigability, including proposals for voice browsers.
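   A toy version of a navigability measure in this spirit (not the paper's metric) is simply how many items a voice-browser user must step through, in reading order, before reaching the main content:

      def reach_cost(page_items, is_main_content):
          # page_items: focusable/readable items in source order.
          for count, item in enumerate(page_items):
              if is_main_content(item):
                  return count
          return len(page_items)        # main content never reached

      # Hypothetical page: three navigation links precede the results text.
      items = [("link", "Home"), ("link", "Cart"), ("link", "Sign in"),
               ("text", "Search results", "MAIN")]
      print(reach_cost(items, lambda it: len(it) == 3 and it[2] == "MAIN"))   # -> 3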
Evaluating DANTE: Semantic transcoding for visually disabled users
  Yeliz Yesilada; Robert Stevens; Simon Harper; Carole Goble
The importance of the World Wide Web for information dissemination is indisputable. However, the dominance of visual design on the Web leaves visually disabled people at a disadvantage. Although assistive technologies, such as screen readers, usually provide basic access to information, the richness of the Web experience is still often lost. In particular, traversing the Web becomes a complicated task, since the rich visual objects presented to their sighted counterparts are neither appropriate nor accessible to visually disabled users. To address this problem, we have proposed an approach called Dante in which Web pages are annotated with semantic information to make their traversal properties explicit. Dante supports different annotation techniques; as a proof of concept, in this article pages are annotated manually and then transcoded into an enriched form. We first introduce Dante and then present a user evaluation that compares how visually disabled users perform certain travel-related tasks on original and transcoded versions of Web pages. We discuss the evaluation methodology in detail and present our findings, which provide useful insights into the transcoding process. Our evaluation shows that, in tests with users, document objects transcoded with Dante tend to be much easier for visually disabled users to interact with when traversing Web pages.
Providing signed content on the Internet by synthesized animation
  J. R. Kennaway; J. R. W. Glauert; I. Zwitserlood
Written information is often of limited accessibility to deaf people who use sign language. The eSign project was undertaken in response to the need for technologies enabling efficient production and distribution of sign language content over the Internet. By using an avatar-independent scripting notation for signing gestures and a client-side web browser plug-in to translate this notation into motion data for an avatar, we achieve highly efficient delivery of signing, while avoiding the inflexibility of video or motion capture. Tests with members of the deaf community have indicated that the method can provide an acceptable quality of signing.

TOCHI 2007 Volume 14 Issue 4

Proactive displays: Supporting awareness in fluid social environments
  David W. McDonald; Joseph F. McCarthy; Suzanne Soroczak; David H. Nguyen; Al M. Rashid
Academic conferences provide a social space for people to present their work and interact with one another. However, opportunities for interaction are unevenly distributed among the attendees. We seek to extend the opportunities for interaction among attendees by using technology to enable them to reveal information about their background and interests in different settings. We evaluate a suite of applications that augment three physical social spaces at an academic conference. The applications were designed to augment formal conference paper sessions and informal breaks. A mixture of qualitative observation and survey response data is used to frame the impacts from both individual and group perspectives. Respondents reported on their interactions and serendipitous findings of shared interests with other attendees. However, some respondents also identified distracting aspects of the augmentation. Our discussion relates these results to existing theory of group behavior in public places, to how these social-space augmentations relate to awareness, and to the problem of shared interaction models.
Subjunctive interfaces: Extending applications to support parallel setup, viewing and control of alternative scenarios
  Aran Lunzer; Kasper Hornbæk
Many applications require exploration of alternative scenarios; most support it poorly. Subjunctive interfaces provide mechanisms for the parallel setup, viewing and control of scenarios, aiming to support users' thinking about and interaction with their choices. We illustrate how applications for information access, real-time simulation, and document design may be extended with these mechanisms. To investigate the usability of this form of extension, we compare a simple census browser against a version with a subjunctive interface. In the first of three studies, subjects reported higher satisfaction with the subjunctive interface, and relied less on interim marks on paper. No reduction in task completion time was found, however, mainly because some subjects encountered problems in setting up and controlling scenarios. At the end of a second, five-session study, users of a redesigned interface completed tasks 27% more quickly than with the simple interface. In the third study we examined how subjects reasoned about multiple-scenario setups in pursuing complex, open-ended data explorations. Our main observation was that subjects treated scenarios as information holders, using them creatively in various ways to facilitate task completion.
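   The mechanism can be pictured as a small scenario-set structure in which alternatives are forked, adjusted individually or in parallel, and evaluated side by side; the names below are assumptions, not the authors' implementation.

      from copy import deepcopy

      class ScenarioSet:
          def __init__(self, base_params):
              self.scenarios = [deepcopy(base_params)]       # start with one scenario

          def fork(self, index):
              # Duplicate a scenario so an alternative choice can be explored.
              self.scenarios.append(deepcopy(self.scenarios[index]))
              return len(self.scenarios) - 1

          def set_param(self, key, value, indices=None):
              # Change a parameter in selected scenarios, or in all of them in parallel.
              for i in (indices if indices is not None else range(len(self.scenarios))):
                  self.scenarios[i][key] = value

          def views(self, run_query):
              # Evaluate every scenario for side-by-side display.
              return [run_query(params) for params in self.scenarios]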
PapierCraft: A gesture-based command system for interactive paper
  Chunyuan Liao; François Guimbretière; Ken Hinckley; Jim Hollan
Paper persists as an integral component of active reading and other knowledge-worker tasks because it provides ease of use unmatched by digital alternatives. Paper documents are light to carry, easy to annotate, rapid to navigate, flexible to manipulate, and robust to use in varied environments. Interactions with paper documents create rich webs of annotation, cross reference, and spatial organization. Unfortunately, the resulting webs are confined to the physical world of paper and, as they accumulate, become increasingly difficult to store, search, and access. XLibris [Schilit et al. 1998] and similar systems address these difficulties by simulating paper with tablet PCs. While this approach is promising, it suffers not only from limitations of current tablet computers (e.g., limited screen space) but also from loss of invaluable paper affordances.
   In this article, we describe PapierCraft, a gesture-based command system that allows users to manipulate digital documents using paper printouts as proxies. Using an Anoto [Anoto 2002] digital pen, users can draw command gestures on paper to tag a paragraph, e-mail a selected area, copy selections to a notepad, or create links to related documents. Upon pen synchronization, PapierCraft executes the commands and presents the results in a digital document viewer. Users can then search the tagged information and navigate the web of annotated digital documents resulting from interactions with the paper proxies. PapierCraft also supports real-time interactions across mixed media, for example, letting users copy information from paper to a Tablet PC screen. This article presents the design and implementation of the PapierCraft system and describes user feedback from initial use.
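   The batch command model can be sketched as a dispatch loop run at pen-synchronization time; the recognizer and handler names below are placeholders rather than the PapierCraft API.

      def execute_pen_session(strokes, recognize, handlers):
          # strokes: raw pen strokes captured on paper.
          # recognize(stroke) -> (command, page, region), e.g. ("tag", 3, bbox).
          results = []
          for stroke in strokes:
              command, page, region = recognize(stroke)
              if command in handlers:            # e.g. "tag", "email", "copy", "link"
                  results.append(handlers[command](page, region))
          return results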
Comparing usability of one-way and multi-way constraints for diagram editing
  Michael Wybrow; Kim Marriott; Linda McIver; Peter J. Stuckey
We investigate the usability of constraint-based alignment and distribution placement tools in diagram editors. Currently, one-way constraints are used to provide alignment and distribution tools in many commercial editors. We believe the limitations of these constraints lead to serious usability issues, and thus suggest that such tools be implemented using multi-way constraints. We have conducted two usability studies, the first studies we are aware of that examine the relative usefulness of interactive graphical tools based on one-way and multi-way constraints. They provide strong evidence that multi-way constraint-based alignment and distribution tools are more usable than their one-way counterparts.
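   The distinction under study can be shown in a few lines: a one-way alignment constraint only propagates changes from a source shape to a target, whereas a multi-way constraint re-establishes alignment whichever shape the user moves (illustrative code only, not the studied editors).

      class Shape:
          def __init__(self, y):
              self.y = y

      def one_way_align(source, target):
          target.y = source.y            # moving `target` afterwards breaks alignment

      def multi_way_align(shapes, moved):
          for s in shapes:               # solver restores y1 = y2 = ... = yn
              s.y = moved.y              # regardless of which shape was moved

      a, b = Shape(10), Shape(40)
      one_way_align(a, b)                # b follows a ...
      b.y = 70                           # ... but a does not follow b (one-way)
      multi_way_align([a, b], moved=b)   # multi-way: alignment holds either way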
Metaphors of human thinking for usability inspection and design
  Erik Frøkjær; Kasper Hornbæk
Usability inspection techniques are widely used, but few focus on users' thinking and many are appropriate only for particular devices and use contexts. We present a new technique (MOT) that guides inspection by metaphors of human thinking. The metaphors concern habit, the stream of thought, awareness and associations, the relation between utterances and thought, and knowing. The main novelty of MOT is its psychological basis combined with its use of metaphors to stimulate inspection. The first of three experiments shows that usability problems uncovered with MOT are more serious and more complex to repair than problems found with heuristic evaluation. Problems found with MOT are also judged more likely to persist for expert users. The second experiment shows that MOT finds more problems than cognitive walkthrough, and has a wider coverage of a reference collection of usability problems. Participants prefer using MOT over cognitive walkthrough, an important reason being the wider scope of MOT. The third experiment compares MOT, cognitive walkthrough, and think aloud testing, in the context of nontraditional user interfaces. Participants prefer using think aloud testing, but identify few problems with that technique that are not also found with MOT or cognitive walkthrough. MOT identifies more problems than the other techniques. Across experiments and measures of usability problems' utility in systems design, MOT performs better than existing inspection techniques and is comparable to think aloud testing.
Understanding changes in mental workload during execution of goal-directed tasks and its application for interruption management
  Brian P. Bailey; Shamsi T. Iqbal
Notifications can have reduced interruption cost if delivered at moments of lower mental workload during task execution. Cognitive theorists have speculated that these moments occur at subtask boundaries. In this article, we empirically test this speculation by examining how workload changes during execution of goal-directed tasks, focusing on regions between adjacent chunks within the tasks, that is, the subtask boundaries. In a controlled experiment, users performed several interactive tasks while their pupil dilation, a reliable measure of workload, was continuously measured using an eye tracking system. The workload data was extracted from the pupil data, precisely aligned to the corresponding task models, and analyzed. Our principal findings include (i) workload changes throughout the execution of goal-directed tasks; (ii) workload exhibits transient decreases at subtask boundaries relative to the preceding subtasks; (iii) the amount of decrease tends to be greater at boundaries corresponding to the completion of larger chunks of the task; and (iv) different types of subtasks induce different amounts of workload. We situate these findings within resource theories of attention and discuss important implications for interruption management systems.
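   The boundary analysis can be pictured as comparing mean pupil-based workload during each subtask with a short window just after the subtask completes; the window size and data layout below are assumptions for illustration.

      def boundary_decreases(samples, subtasks, window=0.5):
          # samples: (time_s, pupil_workload) pairs; subtasks: (start_s, end_s) intervals.
          def mean_between(t0, t1):
              vals = [p for t, p in samples if t0 <= t < t1]
              return sum(vals) / len(vals) if vals else float("nan")
          decreases = []
          for start, end in subtasks:
              during = mean_between(start, end)              # workload over the subtask
              at_boundary = mean_between(end, end + window)  # just after it completes
              decreases.append(during - at_boundary)         # positive = transient drop
          return decreases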