| Display Navigation by an Expert Programmer: A Preliminary Model of Memory | | BIBAK | HTML | 3-10 | |
| Erik M. Altmann; Jill H. Larkin; Bonnie E. John | |||
| Skilled programmers, working on natural tasks, navigate large information
displays with apparent ease. We present a computational cognitive model
suggesting how this navigation may be achieved. We trace the model on two
related episodes of behavior. In the first, the user acquires information from
the display. In the second, she recalls something about the first display and
scrolls back to it. The episodes are separated by time and by intervening
displays, suggesting that her navigation is mediated by long-term memory, as
well as working memory and the display. In the first episode, the model
automatically learns to recognize what it sees on the display. In the second
episode, a chain of recollections, cued initially by the new display, leads the
model to imagine what it might have seen earlier. The knowledge from the first
episode recognizes this image, leading the model to scroll in search of the
real thing. This model is a step in developing a psychology of skilled
programmers working on their own tasks. Keywords: Psychology of programming, User models, Expert programmers, Display
navigation, Program comprehension, Memory, Learning, Soar | |||
| Predictive Engineering Models Using the EPIC Architecture for a High-Performance Task | | BIBA | HTML | 11-18 | |
| David E. Kieras; Scott D. Wood; David E. Meyer | |||
| Engineering models of human performance permit some aspects of usability of interface designs to be predicted from an analysis of the task, and thus can replace to some extent expensive user testing data. Human performance in telephone operator tasks was successfully predicted using engineering models constructed in the EPIC (Executive Process-Interactive Control) architecture for human information-processing, which is especially suited for modeling multimodal, complex tasks. Several models were constructed on an a priori basis to represent different hypotheses about how users coordinate their activities to produce rapid task performance. All of the models predicted the total task time with useful accuracy, and clarified some important properties of the task. | |||
| Modeling Time-Constrained Learning in a Highly Interactive Task | | BIBAK | HTML | 19-26 | |
| Malcolm I. Bauer; Bonnie E. John | |||
| We investigate whether a memory-based learning procedure can explain the
development of expertise within the time-constraints of a fast-paced highly
interactive task. Our computational cognitive model begins with novice-like
knowledge of a domain, and through experience converges on behavior that
matches a pre-existing GOMS model of expert human performance. The model
coordinates perception, comprehension, strategic planning, learning, memory,
and motor action to respond to the time demands of the task while continually
improving its performance. Because the model was constructed within the Soar
architecture, it is able to make predictions of learning and performance time. Keywords: Learning, GOMS, Soar, Cognitive models | |||
| KidSim: End User Programming of Simulations | | BIBAK | HTML | 27-34 | |
| Allen Cypher; David Canfield Smith | |||
| KidSim is an environment that allows children to create their own
simulations. They create their own characters, and they create rules that
specify how the characters are to behave and interact. KidSim is programmed by
demonstration, so that users do not need to learn a conventional programming
language or scripting language. Informal user studies have shown that children
are able to create simulations in KidSim with a minimum of instruction, and
that KidSim stimulates their imagination. Keywords: End user programming, Simulations, Programming by demonstration, Graphical
rewrite rules, Production systems, Programming by example, User programming | |||
| Building Geometry-Based Widgets by Example | | BIBAK | HTML | 35-42 | |
| Dan R. Olsen, Jr.; Brett Ahlstrom; Douglas Kohlert | |||
| Algorithms are presented for creating new widgets by example. The basic
model is one of an editable picture which can be mapped to control information.
The mappings are learned from examples. The set of possible maps is readily
extensible. Keywords: Widgets, Demonstrational interfaces, Toolkit builder, User interface
software | |||
| Interactive Sketching for the Early Stages of User Interface Design | | BIBAK | HTML | 43-50 | |
| James A. Landay; Brad A. Myers | |||
| Current interactive user interface construction tools are often more of a
hindrance than a benefit during the early stages of user interface design.
These tools take too much time to use and force designers to specify more of
the design details than they wish at this early stage. Most interface
designers, especially those who have a background in graphic design, prefer to
sketch early interface ideas on paper or on a whiteboard. We are developing an
interactive tool called SILK that allows designers to quickly sketch an
interface using an electronic pad and stylus. SILK preserves the important
properties of pencil and paper: a rough drawing can be produced very quickly
and the medium is very flexible. However, unlike a paper sketch, this
electronic sketch is interactive and can easily be modified. In addition, our
system allows designers to examine, annotate, and edit a complete history of
the design. When the designer is satisfied with this early prototype, SILK can
transform the sketch into a complete, operational interface in a specified
look-and-feel. This transformation is guided by the designer. By supporting
the early phases of the interface design life cycle, our tool should both ease
the development of user interface prototypes and reduce the time needed to
create a final interface. This paper describes our prototype and provides
design ideas for a production-level system. Keywords: User interfaces, Design, Sketching, Gesture recognition, Interaction
techniques, Programming-by-demonstration, Pen-based computing, Garnet, SILK | |||
| Information Foraging in Information Access Environments | | BIBAK | HTML | 51-58 | |
| Peter Pirolli; Stuart Card | |||
| Information foraging theory is an approach to the analysis of human
activities involving information access technologies. The theory derives from
optimal foraging theory in biology and anthropology, which analyzes the
adaptive value of food-foraging strategies. Information foraging theory
analyzes trade-offs in the value of information gained against the costs of
performing activity in human-computer interaction tasks. The theory is
illustrated by application to information-seeking tasks involving a
Scatter/Gather interface, which presents users with a navigable, automatically
computed, overview of the contents of a document collection arranged as a
cluster hierarchy. Keywords: Information foraging theory, Information access | |||
| TileBars: Visualization of Term Distribution Information in Full Text Information Access | | BIBA | HTML | 59-66 | |
| Marti A. Hearst | |||
| The field of information retrieval has traditionally focused on textbases consisting of titles and abstracts. As a consequence, many underlying assumptions must be altered for retrieval from full-length text collections. This paper argues for making use of text structure when retrieving from full text documents, and presents a visualization paradigm, called TileBars, that demonstrates the usefulness of explicit term distribution information in Boolean-type queries. TileBars simultaneously and compactly indicate relative document length, query term frequency, and query term distribution. The patterns in a column of TileBars can be quickly scanned and deciphered, aiding users in making judgments about the potential relevance of the retrieved documents. | |||
| An Organic User Interface for Searching Citation Links | | BIBAK | HTML | 67-73 | |
| Jock D. Mackinlay; Ramana Rao; Stuart K. Card | |||
| This paper describes Butterfly, an Information Visualizer application for
accessing DIALOG's Science Citation databases across the Internet. Network
information often involves slow access that conflicts with the use of
highly-interactive information visualization. Butterfly addresses this
problem, integrating search, browsing, and access management via four
techniques: 1) visualization supports the assimilation of retrieved information
and integrates search and browsing activity, 2) automatically-created
"link-generating" queries assemble bibliographic records that contain reference
information into citation graphs, 3) asynchronous query processes explore the
resulting graphs for the user, and 4) process controllers allow the user to
manage these processes. We use our positive experience with the Butterfly
implementation to propose a general information access approach, called Organic
User Interfaces for Information Access, in which a virtual landscape grows
under user control as information is accessed automatically. Keywords: Information visualization, Search, Browsing, Access management, Information
retrieval, Organic user interfaces, Data fusion, Hypertext, Citation graphs | |||
| End-User Training: An Empirical Study Comparing On-Line Practice Methods | | BIBAK | HTML | 74-81 | |
| Susan Wiedenbeck; Patti L. Zila; Daniel S. McConnell | |||
| An empirical study was carried out comparing three kinds of hands-on
practice in training users of a software package: exercises,
guided-exploration, and a combination of exercises and guided-exploration.
Moderate- to high-experience computer users were trained. Subjects who were
trained with exercises or the combined approach did significantly better in
both time and errors than those trained using guided-exploration. There were
no significant differences between the exercise and the combined approach
groups. Thus, it appears that the better performance of these groups can be
attributed to the exercise component of their practice. Keywords: Training, Practice methods, Exercises, Guided-exploration, Minimal manual,
End-users, Tutorials | |||
| A Comparison of Still, Animated, or Nonillustrated On-Line Help with Written or Spoken Instructions in a Graphical User Interface | | BIBAK | HTML | 82-89 | |
| Susan M. Harrison | |||
| Current forms of on-line help do not adequately reflect the graphical and
dynamic nature of modern graphical user interfaces. Many of today's software
applications provide text-based on-line help to assist users in performing a
specific task. This report describes a study in which 176 undergraduates
received on-line help instructions for completing seven computer-based tasks.
Instructions were provided in either written or spoken form with or without
still graphic or animated visuals. Results consistently revealed that visuals,
either still graphic or animated, in the on-line help instructions enabled
users to perform significantly more tasks in less time and with fewer errors
than did users who did not have visuals accompanying the on-line help
instructions. Although users receiving spoken instructions were faster and
more accurate for the initial set of tasks than were users receiving written
instructions, the majority of subjects preferred written instructions over
spoken instructions. The results of this study suggest additional
empirically-based guidelines to designers for the development of effective
on-line help. Keywords: Graphical user interfaces, On-line help, Visuals, User interface components | |||
| Dynamic Generation of Follow Up Question Menus: Facilitating Interactive Natural Language Dialogues | | BIBAK | HTML | 90-97 | |
| Vibhu O. Mittal; Johanna D. Moore | |||
| Most complex systems provide some form of help facilities. However,
typically, such help facilities do not allow users to ask follow up questions
or request further elaborations when they are not satisfied with the systems'
initial offering. One approach to alleviating this problem is to present the
user with a menu of possible follow up questions at every point. Limiting
follow up information requests to choices in a menu has many advantages, but
there are also a number of issues that must be dealt with in designing such a
system. To dynamically generate useful embedded menus, the system must be able
to, among other things, determine the context of the request, represent and
reason about the explanations presented to the user, and limit the number of
choices presented in the menu. This paper discusses such issues in the context
of a patient education system that generates a natural language description in
which the text is directly manipulable -- clicking on portions of the text
causes the system to generate menus that can be used to request elaborations
and further information. Keywords: Hyper-media, Natural language, Intelligent systems, User interface
components, Usability engineering | |||
| A Generic Platform for Addressing the Multimodal Challenge | | BIBAK | HTML | 98-105 | |
| Laurence Nigay; Joelle Coutaz | |||
| Multimodal interactive systems support multiple interaction techniques such
as the synergistic use of speech and direct manipulation. The flexibility they
offer results in an increased complexity that current software tools do not
address appropriately. One of the emerging technical problems in multimodal
interaction is concerned with the fusion of information produced through
distinct interaction techniques. In this article, we present a generic fusion
engine that can be embedded in a multi-agent architecture modelling technique.
We demonstrate the fruitful symbiosis of our fusion mechanism with PAC-Amodeus,
our agent-based conceptual model, and illustrate the applicability of the
approach with the implementation of an effective interactive system: MATIS, a
Multimodal Airline Travel Information System. Keywords: Multimodal interactive systems, Software design, Software architecture, I/O
devices, Interaction languages, Data fusion | |||
| Developing Dual Interfaces for Integrating Blind and Sighted Users: The HOMER UIMS | | BIBAK | HTML | 106-113 | |
| Anthony Savidis; Constantine Stephanidis | |||
| Existing systems which enable the accessibility of Graphical User Interfaces
to blind people follow an "adaptation strategy"; each system adopts its own
fixed policy for reproducing visual dialogues to a non-visual form, without
knowledge about the application domain or particular dialogue characteristics.
It is argued that non-visual User Interfaces should be more than automatically
generated adaptations of visual dialogues. Tools are required to facilitate
non-visual interface construction, which should allow iterative design and
implementation (not supported by adaptation methods). There is a need for
"integrated" User Interfaces which are concurrently accessible by both sighted
and blind users in order to prevent segregation of blind people in their
working environment. The concept of Dual User Interfaces is introduced as the
most appropriate basis to address this issue. A User Interface Management
System has been developed, called HOMER, which facilitates the development of
Dual User Interfaces. HOMER supports the integration of visual and non-visual
lexical technologies. In this context, a simple toolkit has also been
implemented for building non-visual User Interfaces and has been incorporated
in the HOMER system. Keywords: UIMS, Aids for the impaired, Programming environments | |||
| Improving GUI Accessibility for People with Low Vision | | BIBAK | HTML | 114-121 | |
| Richard L. Kline; Ephraim P. Glinert | |||
| We present UnWindows V1, a set of tools designed to assist low vision users
of X Windows in effectively accomplishing two mundane yet critical interaction
tasks: selectively magnifying areas of the screen so that the contents can be
seen comfortably, and keeping track of the location of the mouse pointer. We
describe our software from both the end user's and implementor's points of
view, with particular emphasis on issues related to screen magnification
techniques. We conclude with details regarding software availability and plans
for future extensions. Keywords: Workstation interfaces, Assistive technology, Low vision, Screen
magnification, X window system | |||
| Collaborative Tools and the Practicalities of Professional Work at the International Monetary Fund | | BIBAK | HTML | 122-129 | |
| Richard Harper; Abigail Sellen | |||
| We show how an ethnographic examination of the International Monetary Fund
in Washington, D.C. has implications for the design of tools to support
collaborative work. First, it reports how information that requires a high
degree of professional judgement in its production is unsuited for most current
groupware tools. This is contrasted with the shareability of information which
can 'stand-alone'. Second, it reports how effective re-use of documents will
necessarily involve paper, or 'paper-like' equivalents. Both issues emphasise
the need to take into account social processes in the sharing of certain kinds
of information. Keywords: CSCW, Work practice, Ethnography, Paper documents, Groupware, Professional
work, International Monetary Fund | |||
| Telephone Operators as Knowledge Workers: Consultants Who Meet Customer Needs | | BIBAK | HTML | 130-137 | |
| Michael J. Muller; Rebecca Carr; Catherine Ashworth; Barbara Diekmann; Cathleen Wharton; Cherie Eickstaedt; Joan Clonts | |||
| We present two large studies and one case study that make a strong case for
considering telephone operators as knowledge workers. We describe a
quantitative analysis of the diversity of operators' knowledge work, and of how
their knowledge work coordinates with the subtle resources contained within
customers' requests. Operators engage in collaborative query refinement with
customers, exhibiting a rich set of skilled performances. Earlier reports
characterized the operators' role as an intermediary between customer and
database. In contrast, we focus on operators' consultative work in which they
use computer systems as one type of support for their primarily cognitive
activities. Our results suggest that knowledge work may be a subtle feature of
many jobs, not only those that are labeled as such. Our methodology may be
useful for the analysis of other domains involving skilled workers. Keywords: Telephone operators, Knowledge work, Expertise, Skilled performance,
Participatory design, Participatory analysis | |||
| Ethics, Lies and Videotape... | | BIBAK | HTML | 138-145 | |
| Wendy E. Mackay | |||
| Videotape has become one of the CHI community's most useful technologies: it
allows us to analyze users' interactions with computers, prototype new
interfaces, and present the results of our research and technical innovations
to others. But video is a double-edged sword. It is often misused, however
unintentionally. How can we use it well, without compromising our integrity?
This paper presents actual examples of questionable videotaping practices. Next, it explains why we cannot simply borrow ethical guidelines from other professions. It concludes with a proposal for developing usable ethical guidelines for the capture, analysis and presentation of video. Keywords: HCI professional issues, Video editing, Ethics, Social computing | |||
| Multidisciplinary Modeling in HCI Design ...In Theory and in Practice | | BIBAK | HTML | 146-153 | |
| Victoria Bellotti; Simon Buckingham Shum; Allan MacLean; Nick Hammond | |||
| In one of the largest multidisciplinary projects in basic HCI research to
date, multiple analytic HCI techniques were combined and applied within an
innovative design context to problems identified by designers of an AV
communication system, or media space. The problems were presented to user-,
system- and design-analysts distributed across Europe. The results of analyses
were integrated and passed back to the designers, and to other domain experts,
for assessment. The aim of this paper is to illustrate some theory-based
insights gained into key problems in media space design and to convey lessons
learned about the process of contributing to design using multiple theoretical
perspectives. We also describe some obstacles which must be overcome if such
techniques are to be transferred successfully to practice. Keywords: Theory, Cognitive modelling, Formal methods, Design practice, Argumentation,
Design rationale, Media spaces, Multidisciplinary | |||
| Design Space Analysis as "Training Wheels" in a Framework for Learning User Interface Design | | BIBAK | HTML | 154-161 | |
| J. W. van Aalst; T. T. Carey; D. L. McKerlie | |||
| Learning about design is a central component in education for human-computer
interaction. We have found Design Space Analysis to be a useful technique for
students learning user interface design skills. In the FLUID tool described
here, we have combined explicit instruction on design, worked case studies, and
problem exercises for learners, yielding an interactive multimedia system to be
incorporated into an HCI design course. FLUID is intended as a "training
wheels" for learning user interface design. In this paper, we address the
question of how this form of teaching might mediate and extend the learning
process and we present our observations on Design Space Analysis as a training
wheels aid for learning user interface design. Keywords: HCI education, Design space analysis, Design rationale, Design skills,
Interactive multimedia | |||
| Practical Education for Improving Software Usability | | BIBAK | HTML | 162-169 | |
| John Karat; Tom Dayton | |||
| A usable software system is one that supports the effective and efficient
completion of tasks in a given work context. In most cases of the design and
development of commercial software, usability is not dealt with at the same
level as other aspects of software engineering (e.g., clear usability
objectives are not set, resources for appropriate activities are not given
priority by project management). One common consequence is the assignment of
responsibility for usability to people who do not have appropriate training, or
who are trained in behavioral sciences rather than in more product-oriented
fields such as design or engineering. Relying on our experiences in industrial
settings, we make personal suggestions of activities for the realistic and
practical alternative of training development team members as usability
advocates. Our suggestions help meet the needs specified in the recent Strong
et al. [21] report on human-computer interaction education, research, and
practice. Keywords: HCI education, Technology transfer, Participatory design, User-centered
design, Usability engineering, Design problem-solving | |||
| Evolution of a Reactive Environment | | BIBAK | HTML | 170-177 | |
| Jeremy R. Cooperstock; Koichiro Tanikoshi; Garry Beirne; Tracy Narine; William Buxton | |||
| A basic tenet of "Ubiquitous computing" (Weiser, 1993 [13]) is that
technology should be distributed in the environment (ubiquitous), yet
invisible, or transparent. In practice, resolving the seeming paradox arising
from the joint demands of ubiquity and transparency is less than simple. This
paper documents a case study of attempting to do just that. We describe our
experience in developing a working conference room which is equipped to support
a broad class of meetings and media. After laying the groundwork and
establishing the context in the Introduction, we describe the evolution of the
room. Throughout, we attempt to document the rationale and motivation. While
derived from a limited domain, we believe that the issues that arise are of
general importance, and have strong implications for future research. Keywords: Case studies, CSCW, Intelligent systems, Reactive environments, Home
automation, Design rationale, Office applications | |||
| The High-Tech Toolbelt: A Study of Designers in the Workplace | | BIBAK | HTML | 178-185 | |
| Tamara Sumner | |||
| Many design professionals assemble collections of off-the-shelf software
applications into toolbelts to perform their job. These designers use several
different tools to create a variety of design representations. This case study
shows how designers evolve initially generic toolbelts through a process of
domain-enriching to make their own domain-specific design environments.
Comparing this practice with theoretical findings concerning design processes
highlights the benefits and limitations of this toolbelt approach. A key
benefit is its flexible support for creating and evolving multiple design
representations. A key limitation is how it hinders iterative design by making
it difficult for designers to maintain consistency across the different design
representations. This limitation could be remedied if tools could be extended
or "tuned" to support the observed domain-enriching process. Such tuning would
enable designers to extend tools during use to: (1) support important domain
distinctions and (2) define dependencies between different design
representations based on these domain distinctions. Keywords: Design, Design environments, Domain-orientation, End user modifiability,
Iterative design, Interoperability, Tailorability, Task-specificity | |||
| Time Affordances: The Time Factor in Diagnostic Usability Heuristics | | BIBAK | HTML | 186-193 | |
| Alex Paul Conn | |||
| A significant body of usability work has addressed the issue of response
time in interactive systems. The sharp increase in desktop and networked
systems changes the user's focus to a more active diagnostic viewpoint.
Today's more experienced networked user is now engaged in complicated
activities for which the issue is whether the system is carrying out the
appropriate task and how well it is proceeding with tasks that may vary in
response time from instantaneous to tens of minutes. We introduce the concept
of a time affordance and a set of principles for determining whether the
diagnostic information available to the user is rich enough to prevent
unproductive and even destructive actions due to an unclear understanding of
progress. Keywords: Usability engineering, Heuristics, Time delay, Affordances, Taxonomy,
Principles, Design rationale, Practical guidelines | |||
| Recommending and Evaluating Choices in a Virtual Community of Use | | BIBAK | HTML | 194-201 | |
| Will Hill; Larry Stead; Mark Rosenstein; George Furnas | |||
| When making a choice in the absence of decisive first-hand knowledge,
choosing as other like-minded, similarly-situated people have successfully
chosen in the past is a good strategy -- in effect, using other people as
filters and guides: filters to strain out potentially bad choices and guides to
point out potentially good choices. Current human-computer interfaces largely
ignore the power of the social strategy. For most choices within an interface,
new users are left to fend for themselves and if necessary, to pursue help
outside of the interface. We present a general history-of-use method that
automates a social method for informing choice and report on how it fares in
the context of a fielded test case: the selection of videos from a large set.
The positive results show that communal history-of-use data can serve as a
powerful resource for use in interfaces. Keywords: Human-computer interaction, Interaction history, Computer-supported
cooperative work, Organizational computing, Browsing, Set-top interfaces,
Resource discovery, Video on demand | |||
| Pointing the Way: Active Collaborative Filtering | | BIBAK | HTML | 202-209 | |
| David Maltz; Kate Ehrlich | |||
| Collaborative filtering is based on the premise that people looking for
information should be able to make use of what others have already found and
evaluated. Current collaborative filtering systems provide tools for readers
to filter documents based on aggregated ratings over a changing group of
readers. Motivated by the results of a study of information sharing, we
describe a different type of collaborative filtering system in which people who
find interesting documents actively send "pointers" to those documents to their
colleagues. A "pointer" contains a hypertext link to the source document as
well as contextual information to help the recipient determine the interest and
relevance of the document prior to accessing it. Preliminary data suggest that
people are using the system in anticipated and unanticipated ways, as well as
creating information "digests". Keywords: Collaborative filtering, Information retrieval, Hypertext, World Wide Web,
Lotus Notes | |||
| Social Information Filtering: Algorithms for Automating "Word of Mouth" | | BIBAK | HTML | 210-217 | |
| Upendra Shardanand; Patti Maes | |||
| This paper describes a technique for making personalized recommendations
from any type of database to a user based on similarities between the interest
profile of that user and those of other users. In particular, we discuss the
implementation of a networked system called Ringo, which makes personalized
recommendations for music albums and artists. Ringo's database of users and
artists grows dynamically as more people use the system and enter more
information. Four different algorithms for making recommendations by using
social information filtering were tested and compared. We present quantitative
and qualitative results obtained from the use of Ringo by more than 2000
people. Keywords: Social information filtering, Personalized recommendation systems, User
modeling, Information retrieval, Intelligent systems, CSCW | |||
| A Comparison of User Interfaces for Panning on a Touch-Controlled Display | | BIBAK | HTML | 218-225 | |
| Jeff A. Johnson | |||
| An experiment was conducted to determine which of several candidate user
interfaces for panning is most usable and intuitive: panning by pushing the
background, panning by pushing the view/window, and panning by touching the
side of the display screen. Twelve subjects participated in the experiment,
which consisted of three parts: 1) subjects were asked to suggest panning user
interfaces that seemed natural to them, 2) subjects each used three different
panning user interfaces to perform a structured panning task, with
experimenters recording their performance, and 3) subjects were asked which of
the three panning methods they preferred. One panning method, panning by
pushing the background, emerged as superior in performance and user preference,
and slightly better in intuitiveness than panning by touching the side of the
screen. Panning by pushing the view/window fared poorly relative to the others
on all measures. Keywords: Touch display, Touchscreen, Panning, Scrolling, Navigation | |||
| Pre-Screen Projection: From Concept to Testing of a New Interaction Technique | | BIBAK | HTML | 226-233 | |
| Deborah Hix; James N. Templeman; Robert J. K. Jacob | |||
| Pre-screen projection is a new interaction technique that allows a user to
pan and zoom integrally through a scene simply by moving his or her head
relative to the screen. The underlying concept is based on real-world visual
perception, namely, the fact that a person's view changes as the head moves.
Pre-screen projection tracks a user's head in three dimensions and alters the
display on the screen relative to head position, giving a natural perspective
effect in response to a user's head movements. Specifically, projection of a
virtual scene is calculated as if the scene were in front of the screen. As a
result, the visible scene displayed on the physical screen expands (zooms)
dramatically as a user moves nearer. This is analogous to the real world,
where the nearer an object is, the more rapidly it visually expands as a person
moves toward it. Further, with pre-screen projection a user can navigate (pan
and zoom) around a scene integrally, as one unified activity, rather than
performing panning and zooming as separate tasks. This paper describes the
technique, the real-world metaphor on which it is conceptually based, issues
involved in iterative development of the technique, and our approach to its
empirical evaluation in a realistic application testbed. Keywords: Interaction techniques, Empirical studies, Pre-screen projection, Egocentric
projection, Formative evaluation, User tasks, Input devices and strategies,
Interaction styles, Input/output devices, Polhemus tracker, Visualization,
Metaphors, User interface component | |||
| Space-Scale Diagrams: Understanding Multiscale Interfaces | | BIBAK | HTML | 234-241 | |
| George W. Furnas; Benjamin B. Bederson | |||
| Big information worlds cause big problems for interfaces. There is too much
to see. They are hard to navigate. An armada of techniques has been proposed
to present the many scales of information needed. Space-scale diagrams provide
an analytic framework for much of this work. By representing both a spatial
world and its different magnifications explicitly, the diagrams allow the
direct visualization and analysis of important scale related issues for
interfaces. Keywords: Zoom views, Multiscale interfaces, Fisheye views, Information visualization,
GIS, Visualization, User interface components, Formal methods, Design rationale | |||
| User Embodiment in Collaborative Virtual Environments | | BIBAK | HTML | 242-249 | |
| Steve Benford; John Bowers; Lennart E. Fahlen; Chris Greenhalgh; Dave Snowdon | |||
| This paper explores the issue of user embodiment within collaborative
virtual environments. By user embodiment we mean the provision of users with
appropriate body images so as to represent them to others and also to
themselves. By collaborative virtual environments we mean multi-user virtual
reality systems which explicitly support co-operative work (although we argue
that the results of our exploration may also be applied to other kinds of
collaborative system). The main part of the paper identifies a list of
embodiment design issues including: presence, location, identity, activity,
availability, history of activity, viewpoint, actionpoint, gesture, facial
expression, voluntary versus involuntary expression, degree of presence,
reflecting capabilities, physical properties, active bodies, time and change,
manipulating your view of others, representation across multiple media,
autonomous and distributed body parts, truthfulness and efficiency. Following
this, we show how these issues are reflected in our own DIVE and MASSIVE
prototype systems and also show how they can be used to analyse several other
existing collaborative systems. Keywords: Virtual reality, CSCW, Embodiment | |||
| Providing Assurances in a Multimedia Interactive Environment | | BIBAK | HTML | 250-256 | |
| Doree Duncan Seligmann; Rebecca T. Mercuri; John T. Edmark | |||
| In ordinary telephone calls, we rely on cues for the assurance that the
connection is active and that the other party is listening to what we are
saying. For instance, noise on the line (whether it be someone's voice,
traffic sounds, or background static from a bad connection) tells us about the
state of our connection. Similarly, the occasional "uhuh" or muffled sounds
from a side conversation tell us about the focus and activity of the person on
the line. Conventional telephony is based on a single connection for
communication between two parties; as such, it has relatively simple assurance
needs.
Multimedia, multiparty systems increase the complexity of the communication in
two orthogonal directions, leading to a concomitant increase in assurance
needs. As the complexity of these systems and services grows, it becomes
increasingly difficult for users to assess the current state of these services
and the level of the user interactions within the systems.
We have addressed this problem through the use of assurances that are designed to provide information about the connectivity, presence, focus, and activity in an environment that is part virtual and part real. We describe how independent network media services (a virtual meeting room service, a holophonic sound service, an application sharing service, and a 3D augmented reality visualization system) were designed to work together, providing users with coordinated cohesive assurances for virtual contexts in multimedia, multiparty communication and interaction. Keywords: Auditory I/O, Communication, Virtual reality, Visualization, Graphics,
Teleconferencing, Telepresence, User-interfaces | |||
| A Virtual Window on Media Space | | BIBAK | HTML | 257-264 | |
| William W. Gaver; Gerda Smets; Kees Overbeeke | |||
| The Virtual Window system uses head movements in a local office to control
camera movement in a remote office. The result is like a window allowing
exploration of remote scenes rather than a flat screen showing moving pictures.
Our analysis of the system, experience implementing a prototype, and
observations of people using it, combine to suggest that it may help overcome
the limitations of typical media space configurations. In particular, it seems
useful in offering an expanded field of view, reducing visual discontinuities,
allowing mutual negotiation of orientation, providing depth information, and
supporting camera awareness. The prototype we built is too large, noisy, slow
and inaccurate for extended use, but it is valuable in opening a space of
possibilities for the design of systems that allow richer access to remote
colleagues. Keywords: CSCW, Groupwork, Media spaces, Video | |||
| Virtual Reality on a WIM: Interactive Worlds in Miniature | | BIBAK | HTML | 265-272 | |
| Richard Stoakley; Matthew J. Conway; Randy Pausch | |||
| This paper explores a user interface technique which augments an immersive
head tracked display with a hand-held miniature copy of the virtual
environment. We call this interface technique the Worlds in Miniature (WIM)
metaphor. In addition to the first-person perspective offered by a virtual
reality system, a World in Miniature offers a second dynamic viewport onto the
virtual environment. Objects may be directly manipulated either through the
immersive viewport or through the three-dimensional viewport offered by the
WIM.
In addition to describing object manipulation, this paper explores ways in which Worlds in Miniature can act as a single unifying metaphor for such application independent interaction techniques as object selection, navigation, path planning, and visualization. The WIM metaphor offers multiple points of view and multiple scales at which the user can operate, without requiring explicit modes or commands. Informal user observation indicates that users adapt to the Worlds in Miniature metaphor quickly and that physical props are helpful in manipulating the WIM and other objects in the environment. Keywords: Virtual reality, Three-dimensional interaction, Two-handed interaction,
Information visualization | |||
| The "Prince" Technique: Fitts' Law and Selection Using Area Cursors | | BIBAK | 273-279 | |
| Paul Kabbash; William Buxton | |||
| In most GUIs, selection is effected by placing the point of the mouse-driven
cursor over the area of the object to be selected. Fitts' law is commonly used
to model such target acquisition, with the term A representing the amplitude,
or distance, of the target from the cursor, and W the width of the target area.
As the W term gets smaller, the index of difficulty of the task increases. The
extreme case of this is when the target is a point. In this paper, we show
that selection in such cases can be facilitated if the cursor is an area,
rather than a point. Furthermore, we show that when the target is a point and
the width of the cursor is W, that Fitts' law still holds. An experiment is
presented and the implications of the technique are discussed for both 2D and
3D interfaces. Keywords: Input techniques, Graphical user interfaces, Fitts' law, Haptic input | |||
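The swap the abstract describes can be made concrete with a small calculation. This is a sketch using the Shannon formulation of Fitts' index of difficulty, ID = log2(A/W + 1); the paper itself may use a different variant, and the pixel values here are invented for illustration:

```python
import math

def index_of_difficulty(amplitude, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(amplitude / width + 1)

# Point cursor acquiring a 16-pixel-wide target at a distance of 256 pixels,
# versus a 16-pixel-wide area cursor acquiring a point target at the same
# distance: the roles of cursor and target swap, but W is unchanged, so the
# predicted difficulty is identical.
point_cursor_id = index_of_difficulty(256, 16)
area_cursor_id = index_of_difficulty(256, 16)
assert point_cursor_id == area_cursor_id
```

This is the abstract's claim in miniature: making the cursor an area of width W recovers the same index of difficulty as a point cursor on a target of width W.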
| Applying Electric Field Sensing to Human-Computer Interfaces | | BIBAK | HTML | 280-287 | |
| Thomas G. Zimmerman; Joshua R. Smith; Joseph A. Paradiso; David Allport; Neil Gershenfeld | |||
| A non-contact sensor based on the interaction of a person with electric
fields for human-computer interface is investigated. Two sensing modes are
explored: an external electric field shunted to ground through a human body,
and an external electric field transmitted through a human body to stationary
receivers. The sensors are low power (milliwatts), high resolution
(millimeter), low cost (a few dollars per channel), have low latency
(millisecond), high update rate (1 kHz), high immunity to noise (>72 dB), are
not affected by clothing, surface texture or reflectivity, and can operate on
length scales from microns to meters. Systems incorporating the sensors
include a finger mouse, a room that knows the location of its occupant, and
people-sensing furniture. Haptic feedback using passive materials is
described. Also discussed are empirical and analytical approaches to transform
sensor measurements into position information. Keywords: User interface, Input device, Gesture interface, Non-contact sensing,
Electric field | |||
| Learning to Write Together Using Groupware | | BIBAK | HTML | 288-295 | |
| Alex Mitchell; Ilona Posner; Ronald Baecker | |||
| Most studies of collaborative writing have focused on mature writers who
have extensive experience with the process of writing together. Typically,
these studies also deal with short, somewhat artificial tasks carried out in a
laboratory, and thus do not extend over a period of time as real writing
usually does.
This paper describes an ethnographic study of collaborative writing by two groups of four grade-six students using synchronous collaborative writing software for one hour per week over a 12-week period. Despite initially having little appreciation of what it means to write together, and no experience in synchronous collaborative writing, both groups produced nearly one dozen short collaboratively conceived, written, and edited documents by the end of the study. A careful analysis of videotape records, written documents, questionnaires, and interviews demonstrated the importance of concepts such as awareness, ownership, and control in the writing process, and highlighted many examples of strengths and weaknesses in the writing software. Keywords: CSCW, Groupware, Group work, Collaborative writing, Learning to write,
Novice writers, Ethnography | |||
| Electronic Futures Markets versus Floor Trading: Implications for Interface Design | | BIBAK | HTML | 296-303 | |
| Satu S. Parikh; Gerald L. Lohse | |||
| The primary concern in designing an interface for an electronic trading
system is the impact on market liquidity [9]. Current systems make use of
efficient order-execution algorithms but fail to capture elements of the
trading floor that contribute to an efficient market [9]. We briefly describe
tasks conducted in futures pit trading and current off-hours electronic trading
systems. Understanding the tasks helps define key components to an interface
for electronic trading. These include visualization of the market and its
participants, a trading process which allows active participation and price
discovery, as well as concurrent interaction among all participants. Keywords: Futures trading, Automated exchange, Trading pits, Interface design,
Electronic markets | |||
| Dinosaur Input Device | | BIBAK | HTML | 304-309 | |
| Brian Knep; Craig Hayes; Rick Sayre; Tom Williams | |||
| We present a system for animating an articulate figure using a physical
skeleton, or armature, connected to a workstation. The skeleton is covered
with sensors that monitor the orientations of the joints and send this
information to the computer via custom-built hardware. The system is precise,
fast, compact, and easy to use. It lets traditional stop-motion animators
produce animation on a computer without requiring them to learn complex
software. The working environment is very similar to the traditional
environment but without the nuisances of lights, a camera, and delicate
foam-latex skin. The resulting animation lacks the artifacts of stop-motion
animation, the pops and jerkiness, and yet retains the intentional subtleties
and hard stops that computer animation often lacks. Keywords: Entertainment applications, Motion capture, Animation | |||
| Dynamic Stereo Displays | | BIBAK | HTML | 310-316 | |
| Colin Ware | |||
| Based on a review of the facts about human stereo vision, a case is made
that the stereo processing mechanism is highly flexible. Stereopsis seems to
provide only local additional depth information, rather than defining the
overall 3D geometry of a perceived scene. New phenomenological and
experimental evidence is presented to support this view. The first
demonstration shows that kinetic depth information dominates stereopsis in a
depth cue conflict. Experiment 1 shows that dynamic changes in effective eye
separation are not noticed if they occur over a period of a few seconds.
Experiment 2 shows that subjects who are given control over their effective eye
separation can comfortably work with larger than normal eye separations when
viewing a low relief scene. Finally, an algorithm is presented for the
generation of dynamic stereo images designed to reduce the normal eye strain
that occurs due to the mis-coupling of focus and vergence cues. Keywords: Stereo displays, Virtual reality, 3D displays | |||
| Transparent Layered User Interfaces: An Evaluation of a Display Design to Enhance Focused and Divided Attention | | BIBAK | HTML | 317-324 | |
| Beverly L. Harrison; Hiroshi Ishii; Kim J. Vicente; William A. S. Buxton | |||
| This paper describes a new research program investigating graphical user
interfaces from an attentional perspective (as opposed to a more traditional
visual perception approach). The central research issue is how we can better
support both focusing attention on a single interface object (without
distraction from other objects) and dividing or time sharing attention between
multiple objects (to preserve context or global awareness). This attentional
trade-off seems to be a central but as yet comparatively ignored issue in many
interface designs. To this end, this paper proposes a framework for
classifying and evaluating user interfaces with semi-transparent windows,
menus, dialogue boxes, screens, or other objects. Semi-transparency fits into
a more general proposed display design space of "layered" interface objects.
We outline the design space, task space, and attentional issues which motivated
our research. Our investigation is comprised of both empirical evaluation and
more realistic application usage. This paper reports on the empirical results
and summarizes some of the application findings. Keywords: Display design, Evaluation, Transparency, User interface design, Interaction
technology | |||
| User-Centered Video: Transmitting Video Images Based on the User's Interest | | BIBAK | HTML | 325-330 | |
| Kimaya Yamaashi; Yukihiro Kawamata; Masayuki Tani; Hidekazu Matsumoto | |||
| Many applications, such as video conference systems and remotely controlled
systems, need to transmit multiple video images through narrow band networks.
However, high quality transmission of the video images is not possible within
the network bandwidth.
This paper describes a technique, User-Centered Video (UCV), which transmits multiple video images through a network by changing quality of the video images based on a user's interest. The UCV assigns a network data rate to each video image in proportion to the user's interest. The UCV transmits video images of interest with high quality, while degrading the remaining video images. The video images are degraded in the space and time domains (e.g., spatial resolution, frame rate) to fit them into the assigned data rates. The UCV evaluates the degree of the user's interest based on the window layouts. The user thereby obtains both the video images of interest, in detail, and the global context of video images, even through a narrow band network. Keywords: Networks or communication, Digital video, Compression, User's interest,
Computing resources | |||
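The proportional-allocation rule described above can be sketched as follows. The stream names and interest weights are hypothetical stand-ins; the paper's actual method of scoring window layouts is not reproduced here:

```python
def allocate_bandwidth(total_kbps, interest_weights):
    """Split a fixed link budget across video streams in proportion
    to per-stream interest weights (e.g. derived from window layout)."""
    total_interest = sum(interest_weights.values())
    return {stream: total_kbps * weight / total_interest
            for stream, weight in interest_weights.items()}

# Hypothetical weights: the focused window scores higher than occluded ones.
rates = allocate_bandwidth(512, {"camera_a": 6, "camera_b": 1, "camera_c": 1})
# camera_a receives the bulk of the budget; the context views share the rest.
```

Degrading each stream to fit its assigned share (lower spatial resolution or frame rate) is then a per-stream encoding decision.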
| Visualizing Complex Hypermedia Networks through Multiple Hierarchical Views | | BIBAK | HTML | 331-337 | |
| Sougata Mukherjea; James D. Foley; Scott Hudson | |||
| Our work concerns visualizing the information space of hypermedia systems
using multiple hierarchical views. Although overview diagrams are useful for
helping the user to navigate in a hypermedia system, for any real-world system
they become too complicated and large to be really useful. This is because
these diagrams represent complex network structures which are very difficult to
visualize and comprehend. On the other hand, effective visualizations of
hierarchies have been developed. Our strategy is to provide the user with
different hierarchies, each giving a different perspective to the underlying
information space to help the user better comprehend the information. We
propose an algorithm based on content and structural analysis to form
hierarchies from hypermedia networks. The algorithm is automatic but can be
guided by the user. The multiple hierarchies can be visualized in various
ways. We give examples of the implementation of the algorithm on two
hypermedia systems. Keywords: Hypermedia, Overview diagrams, Information visualization, Hierarchization | |||
| SageBook: Searching Data-Graphics by Content | | BIBAK | HTML | 338-345 | |
| Mei C. Chuah; Steven F. Roth; John Kolojejchick; Joe Mattis; Octavio Juarez | |||
| Currently, there are many hypertext-like tools and database retrieval
systems that use keyword search as a means of navigation. While useful for
certain tasks, keyword search is insufficient for browsing databases of
data-graphics. SageBook is a system that searches among existing
data-graphics, so that they can be reused with new data. In order to fulfill
the needs of retrieval and reuse, it provides: 1) a direct manipulation,
graphical query interface; 2) a content description language that can express
important relationships for retrieving data-graphics; 3) automatic description
of stored data-graphics based on their content; 4) search techniques sensitive
to the structure and similarity among data-graphics; 5) manual and automatic
adaptation tools for altering data-graphics so that they can be reused with new
data. Keywords: Data-visualization, Data-graphic design, Automatic presentation, Intelligent
interfaces, Content-based search, Image-retrieval, Information-retrieval | |||
| Finding and Using Implicit Structure in Human-Organized Spatial Layouts of Information | | BIBAK | HTML | 346-353 | |
| Frank M. Shipman, III; Catherine C. Marshall; Thomas P. Moran | |||
| Many interfaces allow users to manipulate graphical objects, icons
representing underlying data or the data themselves, against a spatial backdrop
or canvas. Users take advantage of the flexibility offered by spatial
manipulation to create evolving lightweight structures. We have been
investigating these implicit organizations so we can support user activities
like information management or exploratory analysis. To accomplish this goal,
we have analyzed the spatial structures people create in diverse settings and
tasks, developed algorithms to detect the common structures we identified in
our survey, and experimented with new facilities based on recognized structure.
Similar recognition-based functionality can be used within many common
applications, providing more support for users' activities with less attendant
overhead. Keywords: Emergent structure, Spatial diagrams, Spatial structure recognition,
Informal systems, Hypermedia | |||
| Comparison of Face-To-Face and Distributed Presentations | | BIBAK | HTML | 354-361 | |
| Ellen A. Isaacs; Trevor Morris; Thomas K. Rodriguez; John C. Tang | |||
| As organizations become distributed across multiple sites, they are looking
to technology to help support enterprise-wide communication and training to
distant locations. We developed an application called Forum that broadcasts
live video, audio, and slides from a speaker to distributed audiences at their
computer desktops. We studied how distributed presentations over Forum
differed from talks given in face-to-face settings. We found that Forum
attracted larger audiences, but the quality of interaction was perceived to be
lower. Forum appeared to provide more flexible and effective use of slides and
other visual materials. On the whole, audiences preferred to watch talks over
Forum but speakers preferred to give talks in a local setting. The study
raises issues about how to design this technology and how to help people
discover effective ways of using it. Keywords: Distributed presentations, Distance learning, Computer-supported cooperative
work (CSCW), Video conferencing, Multimedia, Organizational communication | |||
| What Mix of Video and Audio is Useful for Small Groups Doing Remote Real-Time Design Work? | | BIBAK | HTML | 362-368 | |
| Judith S. Olson; Gary M. Olson; David K. Meader | |||
| This study reports the second in a series of related studies of the ways in
which small groups work together, and the effects of various kinds of
technology support. In this study groups of three people worked for an hour
and a half designing an Automated Post Office. Our previous work showed that
people doing this task produced higher quality designs when they were able to
use a shared-editor to support their emerging design. This study compares the
same kinds of groups now working at a distance, connected to each other both by
this shared editor and either with high-quality stereo audio or the same audio
plus high-quality video. The video was arranged so that people made eye
contact and spatial relations were preserved, allowing people to have a sense
of who was doing what in a way similar to that in face-to-face work. Results
showed that with video, work was as good in quality as face-to-face work; with
audio only, the quality of the work suffered a small but significant amount.
When working at a distance, however, groups spent more time clarifying to each
other and talking longer about how to manage their work. Furthermore, groups
rated the audio-only condition as having a lower discussion quality, and
reported more difficulty communicating. Perceptions suffer without video, and
work is accomplished in a slightly different manner, but the quality of work
suffers very little. Keywords: Group support system, Remote work, Concurrent editing, Small group behavior,
Desktop video | |||
| Designing SpeechActs: Issues in Speech User Interfaces | | BIBAK | HTML | 369-376 | |
| Nicole Yankelovich; Gina-Anne Levow; Matt Marx | |||
| SpeechActs is an experimental conversational speech system. Experience with
redesigning the system based on user feedback indicates the importance of
adhering to conversational conventions when designing speech interfaces,
particularly in the face of speech recognition errors. Study results also
suggest that speech-only interfaces should be designed from scratch rather than
directly translated from their graphical counterparts. This paper examines a
set of challenging issues facing speech interface designers and describes
approaches to address some of these challenges. Keywords: Speech interface design, Speech recognition, Auditory I/O, Discourse,
Conversational interaction | |||
| Integrating Task and Software Development for Object-Oriented Applications | | BIBAK | HTML | 377-384 | |
| Mary Beth Rosson; John M. Carroll | |||
| We describe an approach to developing object-oriented applications that
seeks to integrate the design of user tasks with the design of software
implementing these tasks. Using the Scenario Browser -- an experimental
environment for developing Smalltalk applications -- a designer employs a
single set of task scenarios to envision and reason about user needs and
concerns and to experiment with and refine object-oriented software
abstractions. We argue that the shared context provided by the scenarios
promotes rapid feedback between usage and software concerns, so that mutual
constraints and opportunities can be recognized and addressed early and
continuingly in the development process. Keywords: Prototyping, Design tools, Scenarios, Object-oriented programming, Software
engineering, Design rationale | |||
| Using Computational Critics to Facilitate Long-Term Collaboration in User Interface Design | | BIBAK | HTML | 385-392 | |
| Uwe Malinowski; Kumiyo Nakakoji | |||
| User interface design and end-user adaptation during the use of the system
should be viewed as an ongoing collaborative design process among interface
designers and end-users. Existing approaches have focused on the two
activities separately and paid little attention to integration of the two by
supporting their asynchronous collaboration over a long period of time
throughout the evolution of the interface design. Our knowledge-based
domain-oriented user interface design environments serve both as design media
and as communication media among interface designers and end-users. An
embedded computational critiquing mechanism not only identifies possible
problematic situations in a design for user interface designers and end-users
but also facilitates asynchronous communication among stakeholders. The
presentation of critiquing messages often triggers designers and end-users to
articulate design rationale by describing how they responded to the critiques.
The recorded design rationale mediates collaboration among end-users and user
interface designers during the end-user adaptation and redesign of the
interface by providing background context for a design decision. Keywords: Usability engineering, Collaborative design, Design rationale, User
interface design environments, Critiquing systems, End-user adaptation, Process
control | |||
| A Theoretically Motivated Tool for Automatically Generating Command Aliases | | BIBAK | HTML | 393-400 | |
| Sarah Nichols; Frank E. Ritter | |||
| A useful approach towards improving interface design is to incorporate known
HCI theory in design tools. As a step toward this, we have created a tool
incorporating several known psychological results (e.g., alias generation rules
and the keystroke model). The tool, simple additions to a spreadsheet
developed for psychology, helps create theoretically motivated aliases for
command line interfaces, and could be further extended to other interface
types. It was used to semi-automatically generate a set of aliases for the
interface to a cognitive modelling system. These aliases reduce typing time by
approximately 50%. Command frequency data, necessary for computing time
savings and useful for arbitrating alias clashes, can be difficult to obtain.
We found that expert users can quickly provide useful and reasonably consistent
estimates, and that the time savings predictions were robust across their
estimates and when compared with a uniform command frequency distribution. Keywords: HCI design tools, Keystroke-Level Model, Design problem solving | |||
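The kind of time-savings estimate the abstract reports can be sketched with a keystroke-level calculation. The 0.28 s-per-keystroke figure is a commonly cited Keystroke-Level Model value for an average skilled typist, and the commands and frequencies below are invented, not taken from the paper:

```python
KEYSTROKE_TIME = 0.28  # seconds per keystroke, a typical KLM estimate

def typing_time(commands):
    """Predicted total typing time: for each (name, frequency) pair,
    frequency * (command length + 1 for Return) * per-keystroke time."""
    return sum(freq * (len(name) + 1) * KEYSTROKE_TIME
               for name, freq in commands)

# Hypothetical command set with usage frequencies, and its aliased version.
full = [("select-attention", 40), ("print-stats", 25)]
aliased = [("sa", 40), ("ps", 25)]
saving = 1 - typing_time(aliased) / typing_time(full)
```

The fractional saving depends on both alias length and the command frequency distribution, which is why the paper treats frequency data as a key input.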
| A Focus+Context Technique Based on Hyperbolic Geometry for Visualizing Large Hierarchies | | BIBAK | HTML | 401-408 | |
| John Lamping; Ramana Rao; Peter Pirolli | |||
| We present a new focus+context (fisheye) technique for visualizing and
manipulating large hierarchies. Our technique assigns more display space to a
portion of the hierarchy while still embedding it in the context of the entire
hierarchy. The essence of this scheme is to lay out the hierarchy in a uniform
way on a hyperbolic plane and map this plane onto a circular display region.
This supports a smooth blending between focus and context, as well as
continuous redirection of the focus. We have developed effective procedures
for manipulating the focus using pointer clicks as well as interactive
dragging, and for smoothly animating transitions across such manipulation. A
laboratory experiment comparing the hyperbolic browser with a conventional
hierarchy browser was conducted. Keywords: Hierarchy display, Information visualization, Fisheye display, Focus+Context
technique | |||
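The continuous refocusing the abstract mentions corresponds, in the standard Poincaré disk model of the hyperbolic plane (an assumption here; the paper's exact mapping is not reproduced), to a Möbius transformation that carries a chosen point to the center of the circular display while keeping every node inside the unit disk:

```python
def refocus(z, c):
    """Mobius transformation of the Poincare disk (points as complex
    numbers with |z| < 1) that carries the focus point c to the center;
    it is a hyperbolic isometry, so all points stay inside the disk."""
    return (z - c) / (1 - c.conjugate() * z)

focus = 0.6 + 0.2j                           # node the user clicked
assert abs(refocus(focus, focus)) < 1e-12    # focus moves to the center
assert abs(refocus(0.9 + 0j, focus)) < 1.0   # other nodes remain in the disk
```

Smoothly animating a focus change then amounts to interpolating c from 0 toward the clicked point and reapplying the map each frame.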
| GeoSpace: An Interactive Visualization System for Exploring Complex Information Spaces | | BIBAK | HTML | 409-414 | |
| Ishantha Lokuge; Suguru Ishizaki | |||
| This paper presents a reactive interface display which allows information
seekers to explore complex information spaces. We have adopted information
seeking dialogue as a fundamental model of interaction and implemented a
prototype system in the mapping domain -- GeoSpace -- which progressively
provides information upon a user's input queries. Domain knowledge is
represented in a form of information presentation plan modules, and an
activation spreading network technique is used to determine the relevance of
information. The reactive nature of the activation spreading network, combined
with visual design techniques, such as typography, color and transparency
enables the system to support the information seeker in exploring the complex
information space. The system also incorporates a simple learning mechanism
which enables the system to adapt the display to a particular user's
preferences. GeoSpace allows users to rapidly identify information in a dense
display, and it can guide a user's attention in a fluid manner while preserving
overall context. Keywords: Interactive techniques, Intelligent interfaces, Cartography, Multi-layer,
Graphics presentation, Activation spreading network | |||
| Enhanced Dynamic Queries via Movable Filters | | BIBAK | HTML | 415-420 | |
| Ken Fishkin; Maureen C. Stone | |||
| Traditional database query systems allow users to construct complicated
database queries from specialized database language primitives. While powerful
and expressive, such systems are not easy to use, especially for browsing or
exploring the data. Information visualization systems address this problem by
providing graphical presentations of the data and direct manipulation tools for
exploring the data. Recent work has reported the value of dynamic queries
coupled with two-dimensional data representations for progressive refinement of
user queries. However, the queries generated by these systems are limited to
conjunctions of global ranges of parameter values. In this paper, we extend
dynamic queries by encoding each operand of the query as a Magic Lens filter.
Compound queries can be constructed by overlapping the lenses. Each lens
includes a slider and a set of buttons to control the value of the filter
function and to define the composition operation generated by overlapping the
lenses. We demonstrate a system that supports multiple, simultaneous, general,
real-valued queries on databases with incomplete data, while maintaining the
simple visual interface of dynamic query systems. Keywords: Viewing filter, Lens, Database query, Dynamic queries, Magic lens,
Visualization | |||
| Turning Research into Practice: Characteristics of Display-Based Interaction | | BIBAK | HTML | 421-428 | |
| Marita Franzke | |||
| This research investigates how several characteristics of display-based
systems support or hinder the exploration and retention of the functions needed
to perform tasks in a new application. In particular it is shown how the
combination of the type of interface action, the number of interaction objects
presented on the screen, and the quality of the label associated with these
objects interact in supporting discovery and retention of the functionality
embedded in those systems. An experiment is reported which provides empirical
evidence for Polson & Lewis's CE+ theory of exploratory learning of computer
systems [11]. It also extends this theory and therefore leads to a refinement
of the cognitive walkthrough procedure that was derived from it. The study
uses an experimental method that combines observations from realistically
complex task scenarios with a detailed analysis of the observed performance. Keywords: Exploration, Retention, Display-based systems, Direct manipulation,
Cognitive theory, Cognitive walkthrough, Experimental method | |||
| Learning and Using the Cognitive Walkthrough Method: A Case Study Approach | | BIBAK | HTML | 429-436 | |
| Bonnie E. John; Hilary Packer | |||
| We present a detailed case study, drawn from many information sources, of a
computer scientist learning and using Cognitive Walkthroughs to assess a
multi-media authoring tool. This study results in several clear messages, both
to system designers and to developers of evaluation techniques: this
technique is currently learnable and usable, but there are several areas where
further method-development would greatly contribute to a designer's use of the
technique. In addition, the emergent picture of the process this evaluator
went through to produce his analysis sets realistic expectations for other
novice evaluators who contemplate learning and using Cognitive Walkthroughs. Keywords: Usability engineering, Inspection methods, Cognitive Walkthrough | |||
| What Help Do Users Need?: Taxonomies for On-Line Information Needs and Access Methods | | BIBAK | HTML | 437-441 | |
| A. W. Roesler; S. G. McLellan | |||
| The feasibility of using a general on-line help taxonomy scheme as the
starting point for our interactive graphical applications' on-line help
specifications was investigated. We assumed that using such a taxonomy would
make it easier for users of the help system, regardless of the application
used. The literature, software conferences, trade shows, and the like point to
enormous differences of opinion about what help even IS, much less how it
should be designed, accessed, displayed, stored, or maintained. While much
research described sound design principles and access methods, very little was
available on WHAT to organize or access. Our effort on defining a taxonomy for
on-line help was based upon three tests:
* Test1, a Wizard-of-Oz usability study of an application that identified what types of on-line help our interactive software users actually ask for;
* Test2, a test that validated a general taxonomy for on-line help content for help providers, based on the results of Test1, and a general taxonomy of access methods derived from these content types; and
* Test3, a repeat of Test1, substituting a prototype help system for Wizard-of-Oz help that successfully validated the usability of both on-line help content and access taxonomies for help users.
This paper summarizes the results of all three tests, highlighting the proposed taxonomies and key findings about them from Test2. Together, the results from all tests indicate that a general taxonomy of information needs and the taxonomy of access methods to particular information types make it easy both for help providers to understand what information they need to supply and for help users to find the help they need quickly. Keywords: On-line help, Taxonomy, User interface, Usability, Empirical evaluation,
Methodology | |||
| Bricks: Laying the Foundations for Graspable User Interfaces | | BIBAK | HTML | 442-449 | |
| George W. Fitzmaurice; Hiroshi Ishii; William Buxton | |||
| We introduce the concept of Graspable User Interfaces that allow direct
control of electronic or virtual objects through physical handles for control.
These physical artifacts, which we call "bricks," are essentially new input
devices that can be tightly coupled or "attached" to virtual objects for
manipulation or for expressing action (e.g., to set parameters or for
initiating processes). Our bricks operate on top of a large horizontal display
surface known as the "ActiveDesk." We present four stages in the development of
Graspable UIs: (1) a series of exploratory studies on hand gestures and
grasping; (2) interaction simulations using mock-ups and rapid prototyping
tools; (3) a working prototype and sample application called GraspDraw; and (4)
the initial integration of the Graspable UI concepts into a commercial
application. Finally, we conclude by presenting a design space for Bricks
which lays the foundation for further exploring and developing Graspable User
Interfaces. Keywords: Input devices, Graphical user interfaces, Graspable user interfaces, Haptic
input, Two-handed interaction, Prototyping, Computer augmented environments,
Ubiquitous computing | |||
| Situated Facial Displays: Towards Social Interaction | | BIBAK | HTML | 450-455 | |
| Akikazu Takeuchi; Taketo Naito | |||
| Most interactive programs have assumed interaction with a single user.
We propose the notion of "Social Interaction" as a new interaction paradigm
between multiple humans and computers. Social interaction requires, first,
that the computer maintain a model of multiple participants; second, that its
behavior be determined not only by internal logic but also by perceived
external situations; and finally, that it actively join the interaction. An experimental
system with these features was developed. It consists of three subsystems: a
vision subsystem that processes motion video input to examine an external
situation, an action/reaction subsystem that generates an action based on
internal logic of a task and a situated reaction triggered by perceived
external situation, and a facial animation subsystem that generates a
three-dimensional face capable of various facial displays. From the experiment
using the system with a number of subjects, we found that subjects generally
tended to try to interpret facial displays of the computer. Such involvement
prevented them from concentrating on a task. We also found that subjects never
recognized situated reactions of the computer that were unrelated to the task,
although they unconsciously responded to them. These findings seem to imply
subliminal involvement of the subjects caused by facial displays and situated
reactions. Keywords: User interface design, Multimodal interfaces, Facial expression,
Anthropomorphism, Subliminal involvement | |||
| Glove-TalkII: An Adaptive Gesture-to-Formant Interface | | BIBAK | HTML | 456-463 | |
| Sidney Fels; Geoffrey Hinton | |||
| Glove-TalkII is a system that translates hand gestures to speech through an
adaptive interface. Hand gestures are mapped continuously to 10 control
parameters of a parallel formant speech synthesizer. The mapping allows the
hand to act as an artificial vocal tract that produces speech in real time.
This gives an unlimited vocabulary and multiple languages, in addition to
direct control of fundamental frequency and volume. Currently, the best version of
Glove-TalkII uses several input devices (including a Cyberglove, a
ContactGlove, a Polhemus sensor, and a foot-pedal), a parallel formant speech
synthesizer and 3 neural networks. The gesture-to-speech task is divided into
vowel and consonant production by using a gating network to weight the outputs
of a vowel and a consonant neural network. The gating network and the
consonant network are trained with examples from the user. The vowel network
implements a fixed, user-defined relationship between hand-position and vowel
sound and does not require any training examples from the user. Volume,
fundamental frequency and stop consonants are produced with a fixed mapping
from the input devices. One subject has trained for about 100 hours to speak
intelligibly with Glove-TalkII. He passed through eight distinct stages while
learning to speak. He speaks slowly with speech quality similar to a
text-to-speech synthesizer but with far more natural-sounding pitch variations. Keywords: Gesture-to-speech device, Gestural input, Speech output, Speech acquisition,
Adaptive interface, Talking machine | |||
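The gating arrangement the abstract describes — a gating network weighting the outputs of a vowel network and a consonant network — is essentially a mixture-of-experts blend. A toy Python sketch; the three "networks" below are invented stand-ins, not the paper's trained models:

```python
# Hedged sketch of gated blending: a gating network yields a weight g in
# [0, 1] that mixes the vowel network's and consonant network's outputs into
# one vector of formant-synthesizer control parameters.
import math

N_PARAMS = 10  # control parameters of the parallel formant synthesizer

def vowel_net(hand):       # fixed hand-position -> vowel mapping (stand-in)
    return [math.tanh(0.1 * sum(hand))] * N_PARAMS

def consonant_net(hand):   # user-trained consonant mapping (stand-in)
    return [math.tanh(-0.1 * sum(hand))] * N_PARAMS

def gate(hand):            # gating network: near 1 -> vowel, near 0 -> consonant
    return 1.0 / (1.0 + math.exp(-sum(hand)))

def controls(hand):
    g = gate(hand)
    return [g * v + (1.0 - g) * c
            for v, c in zip(vowel_net(hand), consonant_net(hand))]

params = controls([0.2, -0.5, 0.9])   # toy hand-configuration features
assert len(params) == N_PARAMS
```

Because the gate output is continuous, the blend transitions smoothly between vowel-like and consonant-like control parameters as the hand moves, which is the property that lets the hand act as an artificial vocal tract.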
| Pictures as Input Data | | BIBAK | HTML | 464-471 | |
| Douglas C. Kohlert; Dan R. Olsen, Jr. | |||
| This paper suggests that there exists a large class of inherently graphical
applications that could use pictures as their primary input data. These
applications have no need to store input data in any other format and thus
eliminate the need to do conversions between input data and a graphical
representation. Since the graphical representation is the only representation
of the data, such applications allow users to edit an application's input data
by manipulating pictures in a drawing editor. Such an environment would be
ideal for users of pen-based machines, since data would not have to be entered
via a keyboard; instead, a gesture-based drawing editor could be used. CUPID,
which is a tool for Creating User-Interfaces that use Pictures as Input Data,
is presented. Keywords: Visual languages, Picture parsing, Picture-based applications | |||
| Planning-Based Control of Interface Animation | | BIBAK | HTML | 472-479 | |
| David Kurlander; Daniel T. Ling | |||
| Animations express a sense of process and continuity that is difficult to
convey through other techniques. Although interfaces can often benefit from
animation, User Interface Management Systems (UIMSs) rarely provide the tools
necessary to easily support complex, state-dependent application output, such
as animations. Here we describe Player, an interface component that
facilitates sequencing these animations. One difficulty of integrating
animations into interactive systems is that animation scripts typically only
work in very specific contexts. Care must be taken to establish the required
context prior to executing an animation. Player employs a precondition- and
postcondition-based specification language, and automatically computes which
animation scripts should be invoked to establish the necessary state. Player's
specification language has been designed to make it easy to express the desired
behavior of animation controllers. Since planning can be a time-consuming
process inappropriate for interactive systems, Player precompiles the
plan-based specification into a state machine that executes far more quickly.
Serving as an animation controller, Player hides animation script dependencies
from the application. Player has been incorporated into the Persona UIMS, and
is currently used in the Peedy application. Keywords: Animation, Planning, User interface management systems, UIMS, User interface
components, 3D interfaces | |||
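The precondition/postcondition chaining that Player performs can be illustrated with a tiny forward-search planner. All script names and state facts below are invented, and note that per the abstract Player precompiles this search into a state machine rather than searching at run time:

```python
# Illustrative sketch: each animation script declares preconditions it needs
# and postconditions it establishes; a breadth-first search chains scripts so
# the required context exists before an animation runs.
from collections import deque

SCRIPTS = {  # script name -> (preconditions, postconditions); hypothetical
    "open_door":    ({"at_door"}, {"door_open"}),
    "walk_to_door": (set(), {"at_door"}),
    "enter_room":   ({"door_open"}, {"in_room"}),
}

def plan(state, goal):
    """Return a script sequence whose postconditions reach `goal`, or None."""
    queue = deque([(frozenset(state), [])])
    seen = {frozenset(state)}
    while queue:
        facts, steps = queue.popleft()
        if goal <= facts:
            return steps
        for name, (pre, post) in SCRIPTS.items():
            if pre <= facts:                 # script's context is established
                nxt = frozenset(facts | post)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [name]))
    return None

print(plan(set(), {"in_room"}))  # → ['walk_to_door', 'open_door', 'enter_room']
```

Precompiling the results of such searches into a state machine, as Player does, trades planning time at each interaction for a one-time compilation cost.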
| Bridging the Gulf Between Code and Behavior in Programming | | BIBAK | HTML | 480-486 | |
| Henry Lieberman; Christopher Fry | |||
| Program debugging can be an expensive, complex and frustrating process.
Conventional programming environments provide little explicit support for the
cognitive tasks of diagnosis and visualization faced by the programmer. ZStep
94 is a program debugging environment designed to help the programmer
understand the correspondence between static program code and dynamic program
execution. Some of ZStep 94's innovations include:
* An animated view of program execution, using the very same display used to edit the source code
* A window that displays values which follows the stepper's focus
* An incrementally-generated complete history of program execution and output
* "Video recorder" controls to run the program in forward and reverse directions and control the level of detail displayed
* One-click access from graphical objects to the code that drew them
* One-click access from expressions in the code to their values and graphical output
Keywords: Programming environments, Psychology of programming, Debugging, Educational
applications, Software visualization | |||
| Implicit Structures for Pen-Based Systems within a Freeform Interaction Paradigm | | BIBAK | HTML | 487-494 | |
| Thomas P. Moran; Patrick Chiu; William van Melle; Gordon Kurtenbach | |||
| This paper presents a scheme for extending an informal, pen-based whiteboard
system (Tivoli on the Xerox LiveBoard) to provide a structured editing
capability without violating its free expression and ease of use. The scheme
supports list, text, table, and outline structures over handwritten scribbles
and typed text. The scheme is based on the system temporarily perceiving the
"implicit structure" that humans see in the material, which is called a WYPIWYG
(What You Perceive Is What You Get) capability. The design techniques,
principles, trade-offs, and limitations of the scheme are discussed. A notion
of "freeform interaction" is proposed to position the system with respect to
current user interface techniques. Keywords: Freeform interaction, Implicit structure, Pen-based systems, Scribbling,
Whiteboard metaphor, Informal systems, Recognition-based systems, Perceptual
support, List structures, Gestural interfaces, User interface design | |||
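One way to make "temporarily perceiving implicit structure" concrete is grouping freeform strokes into list items by the vertical gaps between them, so a structured edit can operate on the perceived groups and then discard the interpretation. A hypothetical sketch, not Tivoli's algorithm:

```python
# Illustrative only: cluster stroke bounding intervals into list items by
# vertical whitespace, the kind of implicit structure humans see in scribbles.
def group_into_items(strokes, gap=20):
    """strokes: (top_y, bottom_y) bounding intervals, in any order."""
    items, current = [], []
    for top, bottom in sorted(strokes):
        if current and top - current[-1][1] > gap:   # big gap: new list item
            items.append(current)
            current = []
        current.append((top, bottom))
    if current:
        items.append(current)
    return items

strokes = [(0, 10), (12, 22), (60, 75), (80, 90)]
print(group_into_items(strokes))  # → [[(0, 10), (12, 22)], [(60, 75), (80, 90)]]
```

Because the grouping is recomputed on demand rather than stored, the underlying material stays freeform — the WYPIWYG idea of perceiving structure only when an operation needs it.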
| Back to the Future: Pen and Paper Technology Supports Complex Group Coordination | | BIBAK | HTML | 495-502 | |
| Steve Whittaker; Heinrich Schwarz | |||
| Despite a wealth of electronic group tools for co-ordinating the software
development process, we find many groups choosing apparently outmoded
"material" tools in critical projects. To understand the limitations of
current electronic tools, we studied two groups, contrasting the effectiveness
of both kinds of tools. We show that the size, public location and physical
qualities of material tools engender certain crucial group processes that
current on-line technologies fail to support. A large wallboard located in a
public area promoted group interaction around the board; it enabled
collaborative problem solving and informed individuals about the local
and global progress of the project. Furthermore, the public nature of the
wallboard encouraged greater commitment and updating. However, material tools
fall short on several other dimensions such as distribution, complex dependency
tracking, and versioning. We believe that some of the benefits of material
tools should be incorporated into electronic systems and suggest design
alternatives that could bring these benefits to electronic systems. Keywords: CSCW, Ethnography, Group work, Co-ordination, Group memory, Interpersonal
communications, Media, Software development | |||
| Recognition Accuracy and User Acceptance of Pen Interfaces | | BIBAK | HTML | 503-510 | |
| Clive Frankish; Richard Hull; Pam Morgan | |||
| The accuracy of handwriting recognition is often seen as a key factor in
determining the acceptability of hand-held computers that employ a pen for user
interaction. We report the results of a study in which the relationship
between user satisfaction and recogniser performance was examined in the
context of different types of target application. Subjects with no prior
experience of pen computing evaluated the appropriateness of the pen interface
for performing three different tasks that required translation of handwritten
text. The results indicate that the influence of recogniser performance on
user satisfaction depends on the task context. These findings are interpreted
in terms of the task-related costs and benefits associated with handwriting
recognition. Further analysis of recognition data showed that accuracy did not
improve as subjects became more practised. However, substantial gains in
accuracy could be achieved by selectively adapting the recogniser to deal with
a small, user-specific subset of characters. Keywords: Pen-based input, Handwriting recognition | |||
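The adaptation strategy this abstract reports — targeting a small, user-specific subset of characters rather than the whole recogniser — can be sketched in miniature. Names and data below are invented, not the authors' recogniser:

```python
# Hypothetical illustration: from a log of (intended, recognised) pairs for
# one user, identify the few characters that account for most of that user's
# recognition errors, so adaptation effort can be focused on just those.
from collections import Counter

def worst_characters(confusions, k=3):
    """confusions: (intended, recognised) pairs logged for one user."""
    errors = Counter(intended for intended, got in confusions if intended != got)
    return [ch for ch, _ in errors.most_common(k)]

log = [("a", "o"), ("a", "o"), ("t", "+"), ("e", "e"), ("a", "a")]
print(worst_characters(log, k=2))  # → ['a', 't']
```

Per-user error profiles tend to be concentrated in a handful of letterforms, which is why selective adaptation of this kind can yield substantial accuracy gains cheaply.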
| Designing the PenPal: Blending Hardware and Software in a User-Interface for Children | | BIBAK | HTML | 511-518 | |
| Philippe Piernot; Ramon M. Felciano; Roby Stancel; Jonathan Marsh; Marc Yvon | |||
| As part of the 1994 Apple Interface Design Competition, we designed and
prototyped the PenPal, a portable communications device for children aged four
to six. The PenPal enables children to learn by creating images and sending
them across the Internet to a real audience of friends, classmates, and
teachers. A built-in camera and microphone allow children to take pictures and
add sounds or voice annotations. The pictures can be modified by plugging in
different tools and sent through the Internet using the PenPal Dock. The
limited symbolic reasoning and planning abilities, short attention span, and
pre-literacy of children in this age range were taken into account in the
PenPal design. The central design philosophy and main contribution of the
project was to create a single interface based on continuity of action between
hardware and software elements. The physical interface flows smoothly into the
software interface, with a fuzzy boundary between the two. We discuss the
design process and usability tests that went into designing the PenPal, and the
insights that we gained from the project. Keywords: Hardware and software integration, User-centered design for children,
Internet and multimedia application, Educational application, Portable
computing | |||
| Amazing Animation: Movie Making for Kids | | BIBAK | HTML | 519-524 | |
| Shannon L. Halgren; Tony Fernandes; Deanna Thomas | |||
| The development of the interface for Amazing Animation was a challenging,
unique, and rewarding experience for our Interface Design Group at Claris.
Given the constraints of a very tight timeframe and working with a user
population we were unfamiliar with, our group was able to make numerous
improvements that had a tremendous impact on the product's usability. This
being our first time designing for and testing with children, we learned
volumes about this unique user population. Design assumptions and testing
methodologies used in adult products must all be reworked for kids. This paper
describes the progression of the Amazing Animation interface and points out the
lessons learned about testing and designing for kids along the way. Keywords: Interface design, Kids software, Designing for children, Testing children | |||
| Drag Me, Drop Me, Treat Me Like an Object | | BIBAK | HTML | 525-530 | |
| Annette Wagner; Patrick Curran; Robert O'Brien | |||
| This design briefing covers the major human interface design issues
encountered in the development of the Common Desktop Environment Drag and Drop
Convenience Application Programming Interface. The presentation will walk
through the icon development, user testing and the different problems and
solutions that arose during development. Keywords: Computer-human interface, Direct manipulation, Drag and drop, Common Desktop
Environment, Icons, Drag icons, Motif 1.2 | |||
| The Effects of Practical Business Constraints on User Interface Design | | BIBAK | HTML | 531-537 | |
| Debra Hershmann | |||
| In a business environment, resource, budget and schedule constraints
profoundly affect a product's user interface design. This paper describes the
design of a graphical workflow application as it was affected by compromise
between management, design and development during the product life cycle.
The product is tracked from its initial implementation as a highly functional utility with a non-standard user interface, to its brief life as a prototype representing the ultimate workflow tool. Primary focus is on the third, most recent version, and the design problems that arose in delivering a highly usable interface within practical, real world constraints. Keywords: Iterative design, Resource constraints, Compromise, Prototyping, Usability
testing | |||
| Replacing a Networking Interface "From Hell" | | BIBAK | HTML | 538-545 | |
| Roxanne F. Bradley; Linn D. Johnk | |||
| A multidisciplinary design team at Hewlett-Packard (HP) has successfully
designed a new user interface for a network troubleshooting tool. Users felt
that the new interface let them focus on the task of network troubleshooting,
thus freeing them from the details of the interface and its underlying
implementation. The design team believes that the success achieved is due to
the process used and the multidisciplinary aspect of the team.
This design review describes the process followed by the design team, the difficulties encountered, the results obtained from a comparative evaluation of the new and existing product interfaces, and the lessons learned. Keywords: User-centered design, Usability release criteria, Usability inspections,
Comparative usability testing | |||
| User-Centered Development of a Large-Scale Complex Networked Virtual Environment | | BIBAK | HTML | 546-552 | |
| Thomas W. Mastaglio; Jeanine Williamson | |||
| An integrated development team comprised of industry engineers, government
engineers, and user community representatives is developing a large-scale
complex networked virtual environment for the United States Army. The effort
is organized into concurrent engineering teams responsible for each system
component. Prototypical users who are formally called a User Optimization Team
are an integral part of the development effort. The system under development
is the Close Combat Tactical Trainer (CCTT). It comprises a network of
simulators and workstations which interface with a virtual environment
representing real world terrain. The nature of these systems requires user
involvement in all phases of systems engineering, software development, and
testing. The development organization and the usability engineering approaches
used are mosaics of engineering skills, knowledge and HCI techniques. Keywords: User-centered development, User evaluations, User optimization team,
Concurrent engineering, Integrated development, Spiral system development | |||
| Neither Rain, Nor Sleet, Nor Gloom of Night: Adventures in Electronic Mail | | BIBAK | HTML | 553-557 | |
| Maria Capucciati; Patrick Curran; Kimberly Donner O'Brien; Annette Wagner | |||
| This Design Briefing tells the story of the design and implementation of
Mailer, an electronic mail application being built as part of the Common
Desktop Environment, a UNIX-based desktop. The design is notable in that it
incorporates past usability data, new toolkit widgets, and compliance with a
user interface style that was being written at the time the interface was being
designed. In addition, Mailer is the product of a collaborative effort within
and across companies, where the design is orchestrated among software
developers, human interface engineers, and technical writers across the hall
and across the country. Keywords: User interface design, Electronic mail, Design collaboration, Common Desktop
Environment | |||
| The Interchange Online Network: Simplifying Information Access | | BIBAK | HTML | 558-565 | |
| Ron Perkins | |||
| The AT&T Interchange Online Network is an online service designed to foster
a sense of community while making it easy for customers to find information.
This briefing describes how numerous design iterations aided by usability
testing led to progressive refinement of the interface, specifically the
information space layout for navigation. By combining context and content,
Interchange allows orientation in a large information space. It becomes
possible to understand all that is contained in a specific area at a glance.
One design goal was to leverage editorial expertise while simultaneously taking
advantage of publishing models extended to a very large online information
space. Our overriding objective was to create an elegant, modern, and
professional information service that values the time of busy people. Testing
showed that even people who had never used an online service successfully
navigated the large information space and enjoyed using Interchange. At the
time of this writing, Interchange is at a Beta test stage and the design may be
modified by the time the briefing is presented. Keywords: On-line service, Information design, Information space, Electronic
publishing, Hypertext, Hypermedia, Interface design, Usability testing,
Information retrieval | |||
| Articulating a Metaphor through User-Centered Design | | BIBAK | HTML | 566-572 | |
| H. J. Moll-Carrillo; Gitta Salomon; Matthew Marsh; Jane Fulton Suri; Peter Spreenberg | |||
| The TabWorks book metaphor enhances the standard Windows user interface,
providing an alternative way to organize applications and documents in a
familiar, easy-to-use environment. The TabWorks interface was designed
collaboratively by IDEO and XSoft and was based on a concept developed at Xerox
PARC. This briefing describes how a user-centered approach affected the design
of the TabWorks user interface: how the metaphor's visualization evolved and
how interaction mechanisms were selected and designed. Keywords: User-centered design, Design process, Product design, User observation,
Metaphor, Book, Tab, Application, Document, Container | |||
| Designing a "Front Panel" for Unix: The Evolution of a Metaphor | | BIBAK | HTML | 573-579 | |
| Jay Lundell; Steve Anderson | |||
| The Front Panel component of the Common Desktop Environment is the culmination
of several years' effort in designing a "dashboard-like" element for graphical
Unix desktop systems. This design was a cooperative effort between graphic
design artists, human factors professionals, and software designers, and
eventually became a cross-company effort as it was adopted for the Common
Desktop Environment. We describe the processes that emerged to support this
design, and make observations about how metaphors may evolve over time. Keywords: Metaphor, Front panel, Software design, Visual design, Workspaces, Dashboard | |||