
ACM Transactions on Computer-Human Interaction 2

Editors: Dan R. Olsen, Jr.
Dates: 1995
Volume: 2
Publisher: ACM
Standard No: ISSN 1073-0516
Papers: 14
Links: Table of Contents
  1. TOCHI 1995 Volume 2 Issue 1
  2. TOCHI 1995 Volume 2 Issue 2
  3. TOCHI 1995 Volume 2 Issue 3
  4. TOCHI 1995 Volume 2 Issue 4

TOCHI 1995 Volume 2 Issue 1

Research Contributions

Coupling the User Interfaces of a Multiuser Program, pp. 1-39
  Prasun Dewan; Rajiv Choudhary
We have developed a new model for coupling the user interfaces of a multiuser program. It is based on an interaction model and a user interface framework that allow users and programmers, respectively, to view applications as editors of data. It consists of a semantics model, a specification model, and an implementation model for coupling. The semantics model determines (1) which properties of interaction entities created for a user are shared with corresponding interaction entities created for other users and (2) when changes made by a user to a property of an interaction entity are communicated to other users sharing it. It divides the properties of an interaction entity into multiple coupling sets and allows users to share different coupling sets independently. It supports several criteria for choosing when a change made by a user to a shared property is communicated to other users. These criteria include how structurally complete the change is, how correct it is, and the time at which it was made. The specification model determines how users specify the desired semantics of coupling. It associates interaction entities with inheritable coupling attributes, allows multiple users to specify values of these attributes, and does a runtime matching of the coupling attributes specified by different users to derive the coupling among their user interfaces. The implementation model determines how multiuser programs implement user-customizable coupling. It divides the task of implementing the coupling between system-provided modules and application programs. The modules support automatically a predefined semantics and specification model that can be extended by the programs. We have implemented the coupling model as part of a system called Suite. This paper describes and motivates the model using the concrete example of Suite, discusses how aspects of it can be implemented in other systems, compares it with related work, discusses its shortcomings, and suggests directions for future work.
Keywords: Computer-communication networks, Distributed systems, Distributed applications, Distributed databases, Software engineering, Tools and techniques, User interfaces, Software engineering, Programming environments, Interactive Programming languages, Language constructs, Input/output, Models and principles, User/machine systems, Human factors, Information systems applications, Office automation, Text processing, Text editing, Design, Human factors, Languages, Collaboration, Computer-supported cooperative work, Groupware, Structure editors, User interface management systems
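
The following is a minimal, hypothetical sketch (in Python, which is not Suite's implementation language or API) of the idea behind the specification and semantics models above: each user attaches coupling attributes to sets of properties, and a run-time match of both users' specifications decides whether a given property is shared between their interfaces. All class, attribute, and property names here are invented for illustration.

```python
# Illustrative sketch only: a toy model of per-property "coupling attributes"
# in the spirit of Suite's specification model. Names (CouplingSet,
# transmit_on, etc.) are hypothetical, not Suite's actual interfaces.

from dataclasses import dataclass, field

@dataclass
class CouplingSet:
    """A group of interaction-entity properties shared as a unit."""
    properties: set
    # When a local change is sent to peers: e.g. on every increment
    # (keystroke), on structural completion, or on explicit commit.
    transmit_on: str = "completion"

@dataclass
class UserCoupling:
    user: str
    coupling_sets: list = field(default_factory=list)

    def wants_to_share(self, prop):
        return any(prop in cs.properties for cs in self.coupling_sets)

def matched_coupling(a: UserCoupling, b: UserCoupling, prop: str) -> bool:
    """Run-time matching: a property is coupled between two users only if
    both users' attribute specifications agree to share it."""
    return a.wants_to_share(prop) and b.wants_to_share(prop)

# Example: both users share the 'value' property but only one shares 'font',
# so value changes propagate while font changes stay local.
alice = UserCoupling("alice", [CouplingSet({"value", "font"}, "increment")])
bob = UserCoupling("bob", [CouplingSet({"value"}, "completion")])
assert matched_coupling(alice, bob, "value")
assert not matched_coupling(alice, bob, "font")
```
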
Developing a Reflective Model of Collaborative Systems, pp. 40-63
  Paul Dourish
Recent years have seen a shift in perception of the nature of HCI and interactive systems. As interface work has increasingly become a focus of attention for the social sciences, we have expanded our appreciation of the importance of issues such as work practice, adaptation, and evolution in interactive systems. The reorientation in our view of interactive systems has been accompanied by a call for a new model of design centered around user needs and participation. This article argues that a new process of design is not enough and that the new view necessitates a similar reorientation in the structure of the systems we build. It outlines some requirements for systems that support a deeper conception of interaction and argues that the traditional system design techniques are not suited to creating such systems. Finally, using examples from ongoing work in the design of an open toolkit for collaborative applications, it illustrates how the principles of computational reflection and metaobject protocols can lead us toward a new model based on open abstraction that holds great promise in addressing these issues.
Keywords: Computer-communication networks, Distributed systems, Distributed applications, Software engineering, Tools and techniques, User interfaces, Software engineering, Design, Methodologies, Models and principles, General, Design, Collaborative applications, Computational reflection, Meta-object protocol, Open implementations, System architecture
User Interface Software Tools, pp. 64-103
  Brad A. Myers
Almost as long as there have been user interfaces, there have been special software systems and tools to help design and implement the user interface software. Many of these tools have demonstrated significant productivity gains for programmers, and have become important commercial products. Others have proven less successful at supporting the kinds of user interfaces people want to build. This article discusses the different kinds of user interface software tools, and investigates why some approaches have worked and others have not. Many examples of commercial and research systems are included. Finally, current research directions and open issues in the field are discussed.
Keywords: Software engineering, Tools and techniques, User interfaces, Models and principles, User/machine systems, Human factors, Information interfaces and presentation, User interfaces, User interface management systems, Artificial intelligence, Automatic programming, Program synthesis, Human factors, Languages, Interface builders, Toolkits, User interface development environments, User interface software

TOCHI 1995 Volume 2 Issue 2

Research Contributions

Chiron-1: A Software Architecture for User Interface Development, Maintenance, and Run-Time Support, pp. 105-144
  Richard N. Taylor; Kari A. Nies; Gregory Alan Bolcer; Craig A. MacFarlane; Kenneth M. Anderson; Gregory F. Johnson
The Chiron-1 user interface system demonstrates key techniques that enable a strict separation of an application from its user interface. These techniques include separating the control-flow aspects of the application and user interface: they are concurrent and may contain many threads. Chiron also separates windowing and look-and-feel issues from dialogue and abstract presentation decisions via mechanisms employing a client-server architecture. To separate application code from user interface code, user interface agents called artists are attached to instances of application abstract data types (ADTs). Operations on ADTs within the application implicitly trigger user interface activities within the artists. Multiple artists can be attached to ADTs, providing multiple views and alternative forms of access and manipulation by either a single user or by multiple users. Each artist and the application run in separate threads of control. Artists maintain the user interface by making remote calls to an abstract depiction hierarchy in the Chiron server, insulating the user interface code from the specifics of particular windowing systems and toolkits. The Chiron server and clients execute in separate processes. The client-server architecture also supports multilingual systems: mechanisms are demonstrated that support clients written in programming languages other than that of the server, while nevertheless supporting object-oriented server concepts. The system has been used in several universities and research and development projects. It is available by anonymous ftp.
Keywords: Software engineering, Tools and techniques, User interfaces, Software engineering, Miscellaneous, Reusable software, Information interfaces and presentation, User interfaces, User interface management systems (UIMS), Design, Languages, Artists, Client-server, Concurrency, Event-based integration, User interface architectures
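
As a rough illustration of the artist mechanism described in the abstract above, the sketch below models an application ADT whose operations implicitly notify attached user-interface agents. It is a plain observer-style toy in Python, not Chiron-1's actual Ada interfaces or its client-server depiction calls, and all names are hypothetical.

```python
# Illustrative sketch only: "artists attached to an ADT", modeled as a simple
# dispatch from ADT operations to attached UI agents.

class Artist:
    """A user-interface agent that maintains one view of an ADT."""
    def __init__(self, name):
        self.name = name

    def notify(self, operation, *args):
        # A real Chiron artist would issue remote calls to the server's
        # abstract depiction hierarchy; here we just print the update.
        print(f"{self.name}: redraw after {operation}{args}")

class ListADT:
    """An application abstract data type whose operations implicitly
    trigger user-interface activity in every attached artist."""
    def __init__(self):
        self.items = []
        self.artists = []

    def attach(self, artist):
        self.artists.append(artist)

    def insert(self, index, item):
        self.items.insert(index, item)
        for artist in self.artists:   # multiple views of the same data
            artist.notify("insert", index, item)

todo = ListADT()
todo.attach(Artist("outline-view"))
todo.attach(Artist("graph-view"))
todo.insert(0, "write related-work section")
```
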
Relief from the Audio Interface Blues: Expanding the Spectrum of Menu, List, and Form Styles, pp. 145-176
  Paul Resnick; Robert A. Virzi
Menus, lists, and forms are the workhorse dialogue structures in telephone-based interactive voice response applications. Despite diversity in applications, there is a surprising homogeneity in the menu, list, and form styles commonly employed. There are, however, many alternatives, and no single style fits every prospective application and user population. A design space for each dialogue structure organizes the alternatives and provides a framework for analyzing their benefits and drawbacks. In addition to phone-based interactions, the design spaces apply to any limited-bandwidth, temporally constrained display devices, including small-screen devices such as personal digital assistants (PDAs) and screen phones.
Keywords: Information interfaces and presentation, Multimedia information systems, Audio input/output, Information interfaces and presentation, User interfaces, Interaction styles, Human factors, ADSI, Forms, Interactive voice response (IVR), Menus, PDA, Skip and scan, Voice mail

TOCHI 1995 Volume 2 Issue 3

Special Issue on Virtual Reality Software and Technology

Introduction to the Special Issue on Virtual Reality Software and Technology, pp. 177-178
  Gurminder Singh; Steven K. Feiner
An Approach to Natural Gesture in Virtual Environments, pp. 179-200
  Alan Wexelblat
This article presents research -- an experiment and the resulting prototype -- on a method for treating gestural input so that it can be used for multimodal applications, such as interacting with virtual environments. This method involves the capture and use of natural, empty-hand gestures that are made during conventional descriptive utterances. Users are allowed to gesture in a normal continuous manner, rather than being restricted to a small set of discrete gestural commands as in most other systems. The gestures are captured and analyzed into a higher-level description. This description can be used by an application-specific interpreter to understand the gestural input in its proper context. Having a gesture analyzer of this sort enables natural gesture input to any appropriate application.
Keywords: Information systems, User/machine systems, Human information processing, Information interfaces and presentation, Multimedia information systems, Artificial realities, Information interfaces and presentation, User interfaces, Evaluation/methodology, Input devices and strategies, Interaction styles, Design, Experimentation, Human factors, Performance, Gesture, Input methods, Multimodal, Natural interaction
Taking Steps: The Influence of a Walking Technique on Presence in Virtual Reality, pp. 201-219
  Mel Slater; Martin Usoh; Anthony Steed
This article presents an interactive technique for moving through an immersive virtual environment (or "virtual reality"). The technique is suitable for applications where locomotion is restricted to ground level. The technique is derived from the idea that presence in virtual environments may be enhanced the stronger the match between proprioceptive information from human body movements and sensory feedback from the computer-generated displays. The technique is an attempt to simulate body movements associated with walking. The participant "walks in place" to move through the virtual environment across distances greater than the physical limitations imposed by the electromagnetic tracking devices. A neural network is used to analyze the stream of coordinates from the head-mounted display, to determine whether or not the participant is walking on the spot. Whenever it determines the walking behavior, the participant is moved through virtual space in the direction of his or her gaze. We discuss two experimental studies to assess the impact on presence of this method in comparison to the usual hand-pointing method of navigation in virtual reality. The studies suggest that subjective rating of presence is enhanced by the walking method provided that participants associate subjectively with the virtual body provided in the environment. An application of the technique to climbing steps and ladders is also presented.
Keywords: Models and principles, User/machine systems, Information interfaces and presentation, Multimedia information systems, Artificial realities, Information interfaces and presentation, User interfaces, Computer graphics, Graphics utilities, Virtual device interfaces, Computer graphics, Three-dimensional graphics and realism, Virtual reality, Experimentation, Human factors, Immersion, Locomotion, Navigation, Neural networks, Presence, Virtual environments, Virtual reality
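
A toy sketch of the locomotion loop described in the abstract above: a crude threshold classifier (standing in for the paper's neural network) decides from a window of head-tracker samples whether the participant is walking in place, and if so the viewpoint is advanced along the gaze direction. The classifier, constants, and function names are all invented for illustration, not taken from the paper.

```python
# Illustrative sketch only: walking-in-place locomotion, with a simple
# vertical-oscillation threshold in place of the paper's neural network.

import math

def is_walking_in_place(head_heights, threshold=0.02):
    """Crude stand-in classifier: treat sufficient up-and-down head motion
    over a short window of tracker samples as 'walking on the spot'."""
    return (max(head_heights) - min(head_heights)) > threshold

def step(position, gaze_yaw_rad, head_heights, speed=1.2, dt=0.05):
    """Advance the viewpoint along the horizontal gaze direction whenever
    the classifier reports walking; otherwise stay put."""
    if is_walking_in_place(head_heights):
        x, y = position
        x += speed * dt * math.cos(gaze_yaw_rad)
        y += speed * dt * math.sin(gaze_yaw_rad)
        return (x, y)
    return position

# Example: bobbing head samples move the user forward along the gaze;
# a flat height trace would leave the position unchanged.
pos = step((0.0, 0.0), gaze_yaw_rad=0.0,
           head_heights=[1.70, 1.73, 1.69, 1.74])
print(pos)
```
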
HoloSketch: A Virtual Reality Sketching/Animation Tool, pp. 220-238
  Michael F. Deering
This article describes HoloSketch, a virtual reality-based 3D geometry creation and manipulation tool. HoloSketch is aimed at providing nonprogrammers with an easy-to-use 3D "What-You-See-Is-What-You-Get" environment. Using head-tracked stereo shutter glasses and a desktop CRT display configuration, virtual objects can be created with a 3D wand manipulator directly in front of the user, at very high accuracy and much more rapidly than with traditional 3D drawing systems. HoloSketch also supports simple animation and audio control for virtual objects. This article describes the functions of the HoloSketch system, as well as our experience so far with more-general issues of head-tracked stereo 3D user interface design.
Keywords: Computer graphics, Picture/image generation, Display algorithms, Computer graphics, Three-dimensional graphics and realism, Human factors, 3D animation, 3D graphics, CAD, Graphics drawing systems, Graphics painting systems, Man-machine interface, Virtual reality
MASSIVE: A Collaborative Virtual Environment for Teleconferencing, pp. 239-261
  Chris Greenhalgh; Steven Benford
We describe a prototype virtual reality teleconferencing system called MASSIVE which has been developed as part of our on-going research into collaborative virtual environments. This system allows multiple users to communicate using arbitrary combinations of audio, graphics, and text media over local and wide area networks. Communication is controlled by a so-called spatial model of interaction so that one user's perception of another user is sensitive to their relative positions and orientations. The key concept in this spatial model is the (quantitative) awareness which one object has of another. This is controlled by the observing object's focus and the observed object's nimbus, which describe regions of interest and projection, respectively. Each object's aura defines the total region within which it interacts. This is applied independently in each medium. The system (and the spatial model which it implements) is intended to provide a flexible and natural environment for the spatial mediation of conversation. The model also provides a basis for scaling to relatively large numbers of users. Our design goals include supporting heterogeneity, scalability, spatial mediation, balance of power, and multiple concurrent meetings; MASSIVE meets all of these goals. Our initial experiences show the importance of audio in collaborative VR, and they raise issues about field of view for graphical users, speed of navigation, quality of embodiment, varying perceptions of space, and scalability.
Keywords: Computer-communication networks, Distributed systems, Models and principles, User/machine systems, Information systems applications, Communications applications, Computer conferencing and teleconferencing, Information interfaces and presentation, Multimedia information systems, Artificial realities, Audio input/output, Information interfaces and presentation, User interfaces, Interaction styles, Theory and methods, Information interfaces and presentation, Group and organizational interfaces, Synchronous interaction, Theory and models, Computer graphics, Three-dimensional graphics and realism, Virtual reality, Design, Experimentation, Human factors, Performance, Theory, CSCW, Scalability
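
The sketch below illustrates the aura/focus/nimbus vocabulary from the abstract above with a toy awareness function in Python. The spatial model leaves the actual awareness functions medium- and application-specific, so the distance-based fall-off used here is purely an invented example, not MASSIVE's implementation.

```python
# Illustrative sketch only: a toy version of the spatial model of interaction
# (aura, focus, nimbus, awareness) for a single medium.

from dataclasses import dataclass
import math

@dataclass
class Entity:
    x: float
    y: float
    aura: float    # radius within which the object can interact at all
    focus: float   # radius describing the observer's region of interest
    nimbus: float  # radius describing the observed object's projection

def distance(a: Entity, b: Entity) -> float:
    return math.hypot(a.x - b.x, a.y - b.y)

def awareness(observer: Entity, observed: Entity) -> float:
    """Observer's awareness of observed in one medium: zero unless their
    auras overlap, otherwise a product of focus and nimbus contributions
    that fall off linearly with distance (an invented choice)."""
    d = distance(observer, observed)
    if d > observer.aura + observed.aura:
        return 0.0
    in_focus = max(0.0, 1.0 - d / observer.focus)
    in_nimbus = max(0.0, 1.0 - d / observed.nimbus)
    return in_focus * in_nimbus

alice = Entity(0.0, 0.0, aura=10.0, focus=5.0, nimbus=5.0)
bob = Entity(3.0, 0.0, aura=10.0, focus=5.0, nimbus=4.0)
print(awareness(alice, bob))  # lower: Bob projects a smaller nimbus
print(awareness(bob, alice))  # higher: Alice's nimbus is larger
```
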

TOCHI 1995 Volume 2 Issue 4

Research Contributions

Evaluation of the CyberGlove as a Whole-Hand Input Device, pp. 263-283
  G. Drew Kessler; Larry F. Hodges; Neff Walker
We present a careful evaluation of the sensory characteristics of the CyberGlove model CG1801 whole-hand input device. In particular, we conducted an experimental study that investigated the level of sensitivity of the sensors, their performance in recognizing angles, and factors that affected accuracy of recognition of flexion measurements. Among our results, we show that hand size differences among the subjects of the study did not have a statistical effect on the accuracy of the device. We also analyzed the effect of different software calibration approaches on accuracy of the sensors.
Keywords: Input/output and data communications, Input/output devices, Input/output and data communications, Performance analysis and design aids, Information interfaces and presentation, User interfaces, Input devices and strategies, Experimentation, Human factors, Measurement, CyberGlove, Device evaluation, Hand input, Input devices
Development and Evaluation of Hypermedia for Museum Education: Validation of Metrics, pp. 284-307
  Shoji Yamada; Jung-Kook Hong; Shigeharu Sugita
To define a hypermedia system's ease of use from the user's point of view, we propose three evaluation metrics: an interface shallowness metric, a downward compactness metric, and a downward navigability metric. These express both the cognitive load on users and the structural complexity of the hypermedia contents. We conducted a field study at the National Museum of Ethnology (NME) in Osaka, Japan, to evaluate our hypermedia system and to assess the suitability of our hypermedia metrics from the viewpoint of visiting members of the public. After developing a spreadsheet-type authoring system named HyperEX, we built prototype systems for use by members of the public visiting a special exhibition held at the museum. Questionnaires, interviews, automatic recording of users' navigation operations, and statistical analysis of 449 tested users yielded the following results. First, the suitability of the metrics was found to be satisfactory, indicating that they are useful for developing hypermedia systems. Second, there is a strong relationship between a system's enjoyability and its usability. Transparency and the friendliness of the user interface are the key issues in enjoyability. Finally, the quality of the video strongly affects the overall system evaluation. Video quality is determined by optimum selection of scenes, the length of the video, and appropriate audio-visual expression of the content. This video quality may become the most important issue in developing hypermedia for museum education.
Keywords: Information interfaces and presentation, Multimedia information systems, Evaluation/methodology, Hypertext navigation and maps, Information interfaces and presentation, User interfaces, Evaluation/methodology, Interaction styles, Computer applications, Miscellaneous, Design, Documentation, Human factors, Measurement, Field study, Graph theory, Metrics, Museum, Structural analysis
Note: Corrigendum on this item in Vol. 3, No. 3, p. 285: http://www.acm.org/pubs/articles/journals/tochi/1996-3-3/p285-yamada/p285-yamada.pdf
Demonstrational and Constraint-Based Techniques for Pictorially Specifying Application Objects and Behaviors, pp. 308-356
  Brad Vander Zanden; Brad A. Myers
The Lapidary interface design tool is a demonstrational system that allows the graphics and run-time behaviors that go inside an application window to be specified pictorially. In particular, Lapidary allows the designer to draw example pictures of application-specific graphical objects that the end user will manipulate (such as boxes, arrows, or elements of a list), the feedback that shows which objects are selected (such as small boxes on the sides and corners of an object), and the dynamic feedback objects (such as hairline boxes to show where an object is being dragged). The run-time behavior of all these objects can be specified in a straightforward way using constraints, demonstration, and dialog boxes that allow the designer to provide abstract descriptions of the interactive response to the input devices. Lapidary generalizes from these specific example pictures and behaviors to create prototype objects and behaviors from which instances can be made at run-time. A novel feature of Lapidary's implementation is its use of constraints that have been explicitly specified by the designer to help it generalize example objects and behaviors and to guide it in making inferences.
Keywords: Software engineering, Tools and techniques, User interfaces, Computer graphics, Methodology and techniques, Human factors, Direct manipulation, Interaction, Interaction techniques, Object-oriented design, Programming by example, User interface management systems
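
As a loose illustration of how constraints can tie feedback objects to application objects in the style Lapidary automates, here is a small Python sketch. It is not Lapidary's (Garnet-based) constraint system, and every name in it is hypothetical.

```python
# Illustrative sketch only: a feedback object whose geometry is derived from
# its target object, so the feedback follows when the target is moved.

class Box:
    def __init__(self, left, top, width, height):
        self.left, self.top = left, top
        self.width, self.height = width, height

class SelectionHandles:
    """Feedback object constrained to its target: re-evaluating corners()
    keeps the small handle boxes on the target's corners."""
    def __init__(self, target: Box, size=6):
        self.target, self.size = target, size

    def corners(self):
        t, s = self.target, self.size
        return [(t.left, t.top),
                (t.left + t.width - s, t.top),
                (t.left, t.top + t.height - s),
                (t.left + t.width - s, t.top + t.height - s)]

node = Box(left=100, top=50, width=80, height=40)
handles = SelectionHandles(node)
node.left = 120            # drag the application object...
print(handles.corners())   # ...and the feedback follows on re-evaluation
```
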
Internal Representation and Rule Development in Object-Oriented Design, pp. 357-390
  Jinwoo Kim; F. Javier Lerch; Herbert A. Simon
This article proposes a cognitive framework describing the software development process in object-oriented design (OOD) as building internal representations and developing rules. Rule development (method construction) is performed in two problem spaces: a rule space and an instance space. Rules are generated, refined, and evaluated in the rule space by using three main cognitive operations: Infer, Derive, and Evoke. Cognitive activities in the instance space are called mental simulations and are used in conjunction with the Infer operation in the rule space. In an empirical study with college students, we induced different representations to the same problem by using problem isomorphs. Initially, subjects built a representation based on the problem description. As rule development proceeded, the initial internal representation and designed objects were refined, or changed if necessary, to correspond to knowledge gained during rule development. Differences in rule development processes among groups created final designs that are radically different in terms of their level of abstraction and potential reusability. The article concludes by discussing the implications of these results for object-oriented design.
Keywords: Programming techniques, Object-oriented programming, Software engineering, Design, Representation, Programming languages, Language classifications, Object-oriented languages, Human factors, Design, Experimentation, Internal representation, Object-oriented design, Rule development