
ACM Transactions on Computer-Human Interaction 16

Editors: John M. Carroll; Shumin Zhai
Dates: 2009
Volume: 16
Publisher: ACM
Standard No: ISSN 1073-0516
Papers: 21
Links: Table of Contents
  1. TOCHI 2009 Volume 16 Issue 1
  2. TOCHI 2009 Volume 16 Issue 2
  3. TOCHI 2009 Volume 16 Issue 3
  4. TOCHI 2009 Volume 16 Issue 4

TOCHI 2009 Volume 16 Issue 1

Enabling nonexpert construction of basic sensor-based systems
  Susan Lysecky; Frank Vahid
Technology trends have enabled the deployment of low-cost sensor-based systems, but designing customized sensor-based systems for specific tasks still requires costly engineering by experts. We briefly summarize eBlocks, a technology that enables nonexperts to quickly construct basic customized sensor-based systems without expertise in electronics or programming. We describe experiments illustrating the successful construction of Boolean sensor-based systems by novice users, focusing on intuitive logic and state block design. Additionally, we present preliminary experiments demonstrating the usability of integer-based blocks, and introduce a programmable block and a corresponding configuration methodology intended for nonexpert users.
Using direct and indirect input devices: Attention demands and age-related differences
  Anne Collins McLaughlin; Wendy A. Rogers; Arthur D. Fisk
Researchers have suggested that attention is a key moderating variable predicting performance with an input device [Greenstein and Arnaut 1988], although the attention demands of devices have not been directly investigated. We hypothesized that the attentional demands of input devices are intricately linked to whether the device matches the input requirements of the on-screen task. Further, matching task and device should be more important for attentionally reduced groups, such as older adults. Younger and older adults used either a direct (touch screen) or indirect (rotary encoder) input device to perform matched or mismatched input tasks under a spectrum of attention allocation conditions. Input devices required attention -- more so for older adults, especially in a mismatch situation. In addition, task performance was influenced by the match between task demands and input device characteristics. Though both groups benefited from a match between input device and task input requirements, older adults benefited more, and this benefit increased as less attention was available. We offer an a priori method for choosing an input device for a task by considering the overlap between device attributes and input requirements. These data should inform design decisions concerning input device selection across age groups and task contexts.
Experiences with recombinant computing: Exploring ad hoc interoperability in evolving digital networks
  W. Keith Edwards; Mark W. Newman; Jana Z. Sedivy; Trevor F. Smith
This article describes an infrastructure that supports the creation of interoperable systems while requiring only limited prior agreements about the specific forms of communication between these systems. Conceptually, our approach uses a set of "meta-interfaces" -- agreements on how to exchange new behaviors necessary to achieve compatibility at runtime, rather than requiring that communication specifics be built in at development time -- to allow devices on the network to interact with one another. While this approach to interoperability can remove many of the system-imposed constraints that prevent fluid, ad hoc use of devices now, it imposes its own limitations on the user experience of systems that use it. Most importantly, since devices may be expected to work with peers about which they have no detailed semantic knowledge, it is impossible to achieve the sort of tight semantic integration that can be obtained using other approaches today, despite the fact that these other approaches limit interoperability. Instead, under our model, users must be tasked with performing the sense-making and semantic arbitration necessary to determine how any set of devices will be used together. This article describes the motivation and details of our infrastructure, its implications on the user experience, and our experience in creating, deploying, and using applications built with it over a period of several years.
Shifting the focus from accuracy to recallability: A study of informal note-taking on mobile information technologies
  Liwei Dai; Andrew Sears; Rich Goldman
Mobile information technologies are theoretically well-suited to digitally accommodate informal note-taking, with the notes often recorded quickly and under less than ideal circumstances. Unfortunately, user adoption of mobile support for informal note-taking has been hindered in large part by slow text entry techniques. Building on research confirming people's ability to recognize erroneous text, this study explores two simple modifications to Graffiti-based text entry with the goal of increasing text entry speed: disabling text correction and disabling visual feedback. As expected, both modifications improved text entry speed at the cost of recognizability. To address the decrease in recognizability, a multiapproach text-enhancement algorithm is introduced with the goal of modifying the erroneous note to facilitate the process of recalling the event or activity that originally motivated the note. A study with 75 participants confirmed that the proposed approach of discouraging user-initiated error correction during note-taking, enhancing the resulting erroneous notes, and facilitating recall with enhanced alternative lists, increased note-taking speed by 47% with no negative impact on the participants' ability to recall important details about the scenarios which prompted the note-taking activities. This research highlights the importance and efficacy of shifting the focus from accuracy to recallability when examining the overall efficacy of informal notes. The proposed modifications and adaptations produce significant benefits and have important implications for how mobile technologies are designed to support both informal note-taking and text entry in general.
The benefits of using a walking interface to navigate virtual environments
  Roy A. Ruddle; Simon Lessels
Navigation is the most common interactive task performed in three-dimensional virtual environments (VEs), but it is also a task that users often find difficult. We investigated how body-based information about the translational and rotational components of movement helped participants to perform a navigational search task (finding targets hidden inside boxes in a room-sized space). When participants physically walked around the VE while viewing it on a head-mounted display (HMD), they performed 90% of trials perfectly, comparable to participants who had performed an equivalent task in the real world during a previous study. By contrast, participants performed less than 50% of trials perfectly if they used a tethered HMD (moving by physically turning but pressing a button to translate) or a desktop display (no body-based information). This is the most complex navigational task in which a real-world level of performance has been achieved in a VE. Behavioral data indicate that both translational and rotational body-based information are required to accurately update one's position during navigation, and participants who walked tended to avoid obstacles, even though collision detection was not implemented and no feedback was provided. A walking interface would bring immediate benefits to a number of VE applications.
The calendar is crucial: Coordination and awareness through the family calendar
  Carman Neustaedter; A. J. Bernheim Brush; Saul Greenberg
Everyday family life involves a myriad of mundane activities that need to be planned and coordinated. We describe findings from studies of 44 different families' calendaring routines to understand how to best design technology to support them. We outline how a typology of calendars containing family activities is used by three different types of families -- monocentric, pericentric, and polycentric -- which vary in the level of family involvement in the calendaring process. We describe these family types, the content of family calendars, the ways in which they are extended through annotations and augmentations, and the implications from these findings for design.

TOCHI 2009 Volume 16 Issue 2

Out on the town: A socio-physical approach to the design of a context-aware urban guide
  Jeni Paay; Jesper Kjeldskov; Steve Howard; Bharat Dave
As urban environments become increasingly hybridized, mixing the social, built, and digital in interesting ways, designing for computing in the city presents new challenges -- how do we understand such hybridization, and then respond to it as designers? Here we synthesize earlier work in human-computer interaction, sociology, and architecture in order to deliberately influence the design of digital systems with an understanding of their built and social context of use. We propose, illustrate, and evaluate a multidisciplinary approach combining rapid ethnography, architectural analysis, design sketching, and paper prototyping. Following the approach, we are able to provide empirically grounded representations of the socio-physical context of use, in this case people socializing in urban spaces. We then use this understanding to influence the design of a context-aware system to be used while out on the town. We believe that the approach is of value more generally, particularly when achieving powerfully situated interactions is the design ambition.
The ins and outs of home networking: The case for useful and usable domestic networking
  Rebecca E. Grinter; W. Keith Edwards; Marshini Chetty; Erika S. Poole; Ja-Young Sung; Jeonghwa Yang; Andy Crabtree; Peter Tolmie; Tom Rodden; Chris Greenhalgh; Steve Benford
Householders are increasingly adopting home networking as a solution to the demands created by the presence of multiple computers, devices, and the desire to access the Internet. However, current network solutions are derived from the world of work (and initially the military) and provide poor support for the needs of the home. We present the key findings to emerge from empirical studies of home networks in the UK and US. The studies reveal two key kinds of work that effective home networking relies upon: one, the technical work of setting up and maintaining the home network, and the other, the collaborative and socially organized work of the home in which the network is embedded and which it supports. The two are thoroughly intertwined and rely upon one another for their realization, yet neither is adequately supported by current networking technologies and applications. Explication of the "work to make the home network work" opens up the design space for the continued integration of the home network in domestic life and the elaboration of future support. Key issues include the development of networking facilities that do not require advanced networking knowledge, that are flexible and support the local social order of the home and the evolution of its routines, and that ultimately make the home network visible and accountable to household members.
Rapid prototyping and evaluation of in-vehicle interfaces
  Dario D. Salvucci
As driver distraction from in-vehicle devices becomes an increasingly critical issue, researchers have aimed to establish better scientific understanding of distraction along with better engineering tools to build less distracting devices. This article presents a new system, Distract-R, that allows designers to rapidly prototype and evaluate new in-vehicle interfaces. The core engine of the system relies on a rigorous cognitive model of driver behavior which, when integrated with models of task behavior on the prototyped interfaces, generates predictions of driver performance and distraction. Distract-R allows a designer to prototype basic interfaces, demonstrate possible tasks on these interfaces, specify relevant driver characteristics and driving scenarios, and finally simulate, visualize, and analyze the resulting behavior as generated by the cognitive model. The article includes three modeling studies that demonstrate the system's ability to account for various aspects of driver performance for several types of in-vehicle interfaces. More generally, Distract-R illustrates how cognitive models can be used as internal simulation engines for design tools intended for nonmodelers, with the ultimate goal of helping to understand and predict user behavior in multitasking environments.
Activity-based computing for medical work in hospitals
  Jakob E. Bardram
Studies have revealed that people organize and think of their work in terms of activities that are carried out in pursuit of some overall objective, often in collaboration with others. Nevertheless, modern computer systems are typically single-user oriented, that is, designed to support individual tasks such as word processing while sitting at a desk. This article presents the concept of Activity-Based Computing (ABC), which seeks to create computational support for human activities. The ABC approach has been designed to address activity-based computing support for clinical work in hospitals. In a hospital, the challenges arising from the management of parallel activities and interruptions are amplified because multitasking is now combined with a high degree of mobility, collaboration, and urgency. The article presents the empirical and theoretical background for activity-based computing, its principles, the Java-based implementation of the ABC Framework, and an experimental evaluation together with a group of hospital clinicians. The article contributes to the growing research on support for human activities, mobility, collaboration, and context-aware computing. The ABC Framework presents a unifying perspective on activity-based support for human-computer interaction.
Kansuke: A logograph look-up interface based on a few modified stroke prototypes
  Kumiko Tanaka-Ishii; Julian Godon
We have developed a method that makes it easier for language novices to look up Japanese and Chinese logographs. Instead of relying on the arbitrary conventions of logographs, this method is based on three simple stroke prototypes: horizontal, vertical, and other strokes. For example, the code for the logograph 田 (ta, meaning rice field) is 3-3-0, indicating that the logograph consists of three horizontal strokes, three vertical strokes, and no other strokes. Such codes allow a novice to look up logographs even with no knowledge of the logographic conventions used by native speakers. To make the search easier, a complex logograph can be looked up via the components making up the logograph. We conducted a user evaluation of this system and found that novices could look up logographs with fewer failures with our system than with conventional methods.
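The three-prototype coding scheme described in the abstract can be pictured as a simple lookup table keyed by (horizontal, vertical, other) stroke counts. The sketch below is a hypothetical illustration, not Kansuke's implementation: only the 3-3-0 code for 田 comes from the abstract, and the codes given for 川 and 人 are assumptions for the sake of the example.

```python
# Hypothetical sketch of a Kansuke-style stroke-prototype index.
# Each logograph is keyed by counts of (horizontal, vertical, other) strokes.
STROKE_CODES = {
    "田": (3, 3, 0),  # ta, "rice field": code 3-3-0 per the abstract
    "川": (0, 3, 0),  # kawa, "river": three vertical strokes (assumed code)
    "人": (0, 0, 2),  # hito, "person": two diagonal "other" strokes (assumed code)
}

def lookup(horizontal: int, vertical: int, other: int) -> list[str]:
    """Return every logograph whose prototype-stroke code matches exactly."""
    code = (horizontal, vertical, other)
    return [ch for ch, c in STROKE_CODES.items() if c == code]

print(lookup(3, 3, 0))  # → ['田']
```

A novice who can count the horizontal and vertical line segments in an unfamiliar character could retrieve it this way without knowing radicals or native stroke-order conventions.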

TOCHI 2009 Volume 16 Issue 3

Computer-supported access control
  Gunnar Stevens; Volker Wulf
Traditionally, access control is understood as a purely technical mechanism that rejects or accepts access attempts automatically according to a specific preconfiguration. However, such a perspective neglects the practices of access control and the embeddedness of technical mechanisms within situated action. In this article, we reconceptualize the issue of access control on a theoretical, methodological, and practical level. On a theoretical level, we develop a terminology to distinguish between access control practices and the technical support mechanisms. We coin the term Computer-Supported Access Control (CSAC) to emphasize this perspective. On a methodological level, we discuss empirical investigations of access control behavior from a situated action perspective. We discovered a differentiated set of social practices around traditional access control systems. By applying these findings on a practical level, we enhance the design space of computer-supported access control mechanisms by suggesting a matrix of technical mechanisms that go beyond ex-ante configuration.
Can direct manipulation lower the barriers to computer programming and promote transfer of training?: An experimental study
  Christopher D. Hundhausen; Sean F. Farley; Jonathan L. Brown
Novices face many barriers when learning to program a computer, including the need to learn both a new syntax and a model of computation. By constraining syntax and providing concrete visual representations on which to operate, direct manipulation programming environments can potentially lower these barriers. However, what if the ultimate learning goal of the novice is to be able to program in conventional textual languages, as is the case for introductory computer science students? Can direct manipulation programming environments lower the initial barriers to programming, and, at the same time, facilitate positive transfer to textual programming? To address this question, we designed a new direct manipulation programming interface for novices, and conducted an experimental study to compare the programming processes and outcomes promoted by the direct manipulation interface against those promoted by a textual programming interface. We found that the direct manipulation interface promoted significantly better initial programming outcomes, positive transfer to the textual interface, and significant differences in programming processes. Our results show that direct manipulation interfaces can provide novices with a "way in" to traditional textual programming.
The ModelCraft framework: Capturing freehand annotations and edits to facilitate the 3D model design process using a digital pen
  Hyunyoung Song; François Guimbretière; Hod Lipson
Recent advancements in rapid prototyping techniques such as 3D printing and laser cutting are changing the perception of physical 3D models in architecture and industrial design. Physical models are frequently created not only to finalize a project but also to demonstrate an idea in early design stages. For such tasks, models can easily be annotated to capture comments, edits, and other forms of feedback. Unfortunately, these annotations remain in the physical world and cannot easily be transferred back to the digital world. Our system, ModelCraft, addresses this problem by augmenting the surface of a model with a traceable pattern. Any sketch drawn on the surface of the model using a digital pen is recovered as part of a digital representation. Sketches can also be interpreted as edit marks that trigger the corresponding operations on the CAD model. ModelCraft supports a wide range of operations on complex models, from editing a model to assembling multiple models, and offers physical tools to capture free-space input. Several interviews and a formal study with potential users demonstrated the usefulness of the ModelCraft system. Our system is inexpensive and requires no tracking infrastructure or per-object calibration, and we show how it could be extended seamlessly to use current 3D printing technology.
Unpacking the television: User practices around a changing technology
  Louise Barkhuus; Barry Brown
This article investigates the changing television watching practices amongst early adopters of personal hard-disk video recorders (such as TiVo) and Internet downloading of video. Through in-depth interviews with 21 video enthusiasts, we describe how the rhythms of television watching change when decoupled from broadcast TV schedules. Devices such as TiVo do not simply replace videotapes; TV watching becomes more active as programs are gathered from the schedules, played from a stored collection, and fast-forwarded and paused during playback. Users who download exploit the Internet to view shows and movies not broadcast, yet this watching is not fundamentally different from recording shows using a PVR, since both involve selecting shows from a limited range and waiting before the shows can be watched.

TOCHI 2009 Volume 16 Issue 4

Introduction to the special issue on UIDL for next-generation user interfaces
  Orit Shaer; Robert J. K. Jacob; Mark Green; Kris Luyten
Creating a lightweight user interface description language: An overview and analysis of the personal universal controller project
  Jeffrey Nichols; Brad A. Myers
Over six years, we iterated on the design of a language for describing the functionality of appliances, such as televisions, telephones, VCRs, and copiers. This language has been used to describe more than thirty diverse appliances, and these descriptions have been used to automatically generate both graphical and speech user interfaces on handheld computers, mobile phones, and desktop computers. In this article, we describe the final design of our language and analyze the key design choices that led to this design. Through this analysis, we hope to provide a useful guide for the designers of future user interface description languages.
ICOs: A model-based user interface description technique dedicated to interactive systems addressing usability, reliability and scalability
  David Navarre; Philippe Palanque; Jean-Francois Ladry; Eric Barboni
The design of real-life complex systems calls for advanced software engineering models, methods, and tools in order to meet critical requirements such as reliability, dependability, safety, or resilience that will avoid putting the company, the mission, or even human life at stake. When such systems encompass a substantial interactive component, the same level of confidence is required towards the human-computer interface. Conventional empirical or semiformal techniques, although very fruitful, do not provide sufficient insight into the reliability of the human-system cooperation, and offer no easy way to, for example, quantitatively and qualitatively compare two design options with respect to that reliability. The aim of this article is to present a user interface description language (called ICOs) for the engineering and development of usable and reliable user interfaces. The CASE tool supporting the ICOs notation (called PetShop) is a Petri-net-based tool for the design, specification, prototyping, and validation of interactive software. In that environment, models of the interactive application (built with the ICOs formal description technique) can be interactively modified and executed. This is used to support prototyping phases (when the models and the interactive application evolve significantly to meet late user requirements, for instance) as well as the operation phase (after the system is deployed). The use of ICOs and PetShop is presented on several large-scale systems such as a multimodal ground segment application for satellite control, an air traffic control interactive application, and an application for a new generation of interactive cockpits in large civil aircraft such as the Airbus A380 or Boeing 787.
The article emphasizes the demonstration of the expressive power of the notation and how it can support the description of various aspects of user interfaces, namely interaction techniques (both WIMP and post-WIMP), interactive components (such as widgets), and the behavioral part of interactive applications such as the dialog and the functional core. It also demonstrates that PetShop provides dedicated support for prototyping activities of behavioral aspects at the various levels of the architecture of interactive systems. While the focus is on past work done on various large-scale applications, the article also highlights why and how ICOs and PetShop are able to address challenges raised by next-generation user interfaces.
MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments
  Fabio Paterno; Carmen Santoro; Lucio Davide Spano
One important evolution in software applications is the spread of service-oriented architectures in ubiquitous environments. Such environments are characterized by a wide set of interactive devices, with interactive applications that exploit a number of functionalities developed beforehand and encapsulated in Web services. In this article, we discuss how a novel model-based UIDL can provide useful support for these types of applications, both at design time and at runtime. Web service annotations can also be exploited for providing hints for user interface development at design time. At runtime, the language is exploited to support dynamic generation of user interfaces adapted to the different devices at hand during the user interface migration process, which is particularly important in ubiquitous environments.
A specification paradigm for the design and implementation of tangible user interfaces
  Orit Shaer; Robert J. K. Jacob
Tangible user interfaces show promise for supporting activities such as learning, problem solving, and design. However, tangible user interfaces are currently considered challenging to design and build. Designers and developers of these interfaces encounter several conceptual, methodological, and technical difficulties. Among others, these challenges include: the lack of appropriate interaction abstractions, the shortcomings of current user interface software tools to address continuous and parallel interactions, as well as the excessive effort required to integrate novel input and output technologies. To address these challenges, we propose a specification paradigm for designing and implementing Tangible User Interfaces (TUIs) that enables TUI developers to specify the structure and behavior of a tangible user interface using high-level constructs which abstract away implementation details. An important benefit of this approach, which is based on User Interface Description Language (UIDL) research, is that these specifications could be automatically or semi-automatically converted into concrete TUI implementations. In addition, such specifications could serve as a common ground for investigating both design and implementation concerns by TUI developers from different disciplines.
   Thus, the primary contribution of this article is a high-level UIDL that provides developers from different disciplines means for effectively specifying, discussing, and programming a broad range of tangible user interfaces. There are three distinct elements to this contribution: a visual specification technique that is based on Statecharts and Petri nets, an XML-compliant language that extends this visual specification technique, as well as a proof-of-concept prototype of a Tangible User Interface Management System (TUIMS) that semi-automatically translates high-level specifications into a program controlling specific target technologies.
A natural, tiered and executable UIDL for 3D user interfaces based on Concept-Oriented Design
  Chadwick A. Wingrave; Joseph J. LaViola, Jr.; Doug A. Bowman
3D User Interface (3DUI) design and development requires practitioners (designers and developers) to express their ideas in representations designed for machine execution rather than in natural representations, hampering the development of effective 3DUIs. To address this, Concept-Oriented Design (COD) was created as a theory of software development that supports design and development that is both natural and executable. COD is instantiated in the toolkit Chasm, a natural, tiered, executable User Interface Description Language (UIDL) for 3DUIs, resulting in improved understandability, reduced complexity, and increased reuse. Chasm's utility is shown through evaluations by domain experts, case studies of long-term use, and an analysis of spaces.