| Is Edutainment an Oxymoron? | | BIBA | PDF | 1 | |
| Maria Klawe | |||
| Over the last few years, interest has surged in developing edutainment software, namely applications that possess the allure of electronic games while achieving educational goals. While success seems to have been achieved fairly easily for some of the more straightforward educational tasks such as math drills and learning the alphabet, combining the attractions of entertainment with the effective learning of more sophisticated concepts remains a significant challenge, with few clear successes so far. Little is known about some of the most basic issues, such as which user interfaces, formats, navigational structures, etc. work well with specific educational content? Which activities are attractive to most girls? to most boys? What are the most effective ways of using these materials in schools? in homes? This talk describes ongoing research on these issues in the E-GEMS project, a collaborative effort by computer scientists, education specialists, teachers, and professional game developers. | |||
| Animating Direct Manipulation Interfaces | | BIBAK | PDF | 3-12 | |
| Bruce H. Thomas; Paul Calder | |||
| If judiciously applied, the techniques of cartoon animation can enhance the
illusion of direct manipulation that many human computer interfaces strive to
present. In particular, animation can convey a feeling of substance in the
objects that a user manipulates, strengthening the sense that real work is
being done. This paper suggests some techniques that application programmers
can use to animate direct manipulation interfaces, and it describes tools that
programmers can use to easily incorporate the effects into their code.
Our approach is based on suggesting a range of animation effects by distorting the view of the manipulated object. To explore the idea, we added a warping transformation capability to the InterViews user interface toolkit and used the new transformation to build a simple drawing editor that uses animated feedback. The editor demonstrates the effectiveness of the animation for simple operations, and it shows that the technique is practical even on standard workstation hardware. Keywords: Animation, Direct manipulation, Graphical interfaces, Warp transformation,
User interface toolkits, InterViews | |||
| Amortizing 3D Graphics Optimization Across Multiple Frames | | BIBAK | PDF | 13-19 | |
| Jim Durbin; Rich Gossweiler; Randy Pausch | |||
| This paper describes a mechanism for improving rendering rates dynamically
during runtime in an interactive three-dimensional graphics application.
Well-known techniques such as transforming hierarchical geometry into a flat
list and removing redundant graphics primitives are often performed off-line on
static databases, or continuously every rendering frame. In addition, these
optimizations are usually performed over the whole database. We observe that
much of the database remains static for a fixed period of time, while other
portions are modified continuously (e.g. the camera position), or are
repeatedly modified during some finite interval (e.g. during user interaction).
We have implemented a runtime optimization mechanism which is sensitive to
repeated, local database changes. This mechanism employs timing strategies
which optimize only when the cost of optimization will be amortized over a
sufficient number of frames. Using this optimization scheme, we observe a
rendering speedup of roughly 2.5 in existing applications. We discuss our
initial implementation of this mechanism, the improved timing mechanisms, the
issues and assumptions we made, and future improvements. Keywords: Three-dimensional graphics, Interactive graphics, Real-time, Optimization,
Rendering, Virtual reality | |||
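The timing strategy this abstract describes, optimizing only when the one-time cost will be repaid over enough subsequent frames, can be illustrated with a minimal sketch. The function name, parameters, and the simple "static so far, likely static ahead" persistence model are hypothetical, not the authors' implementation:

```python
def should_optimize(optimize_cost_ms, saving_per_frame_ms, frames_since_change,
                    stability_threshold=30):
    """Decide whether to optimize (e.g. flatten) a scene-graph subtree now.

    Optimize only if the subtree has been static long enough that we expect
    it to stay static, and the per-frame rendering saving will repay the
    one-time optimization cost over that expected static interval.
    """
    if frames_since_change < stability_threshold:
        return False  # still being modified; optimizing now would be wasted
    expected_static_frames = frames_since_change  # naive persistence model
    return saving_per_frame_ms * expected_static_frames > optimize_cost_ms
```

The key property is the amortization test in the last line: a cheap per-frame saving justifies an expensive optimization only when the database region has been, and is expected to remain, stable for many frames.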
| Directness and Liveness in the Morphic User Interface Construction Environment | | BIBAK | PDF | 21-28 | |
| John H. Maloney; Randall B. Smith | |||
| Morphic is a user interface construction environment that strives to embody
directness and liveness. Directness means a user interface designer can
initiate the process of examining or changing the attributes, structure, and
behavior of user interface components by pointing at their graphical
representations directly. Liveness means the user interface is always active
and reactive -- objects respond to user actions, animations run, layout
happens, and information displays update continuously. Four implementation
techniques work together to support directness and liveness in Morphic:
structural reification, layout reification, ubiquitous animation, and live
editing. Keywords: User interface frameworks, User interface construction, Directness,
Liveness, Direct manipulation, Animation, Structural reification, Automatic
layout, Live editing | |||
| The World through the Computer: Computer Augmented Interaction with Real World Environments | | BIBAK | PDF | 29-36 | |
| Jun Rekimoto; Katashi Nagao | |||
| Current user interface techniques such as WIMP or the desktop metaphor do
not support real world tasks, because the focus of these user interfaces is
only on human-computer interactions, not on human-real world interactions. In
this paper, we propose a method of building computer augmented environments
using a situation-aware portable device. This device, called NaviCam, has the
ability to recognize the user's situation by detecting color-code IDs in real
world environments. It displays situation sensitive information by
superimposing messages on its video see-through screen. The combination of
ID-awareness and a portable video see-through display solves several problems
with current ubiquitous computing systems and augmented reality systems. Keywords: User-interface software and technology, Computer augmented environments,
Palmtop computers, Ubiquitous computing, Augmented reality, Barcode | |||
| Retrieving Electronic Documents with Real-World Objects on InteractiveDESK | | BIBAK | PDF | 37-38 | |
| Toshifumi Arai; Kimiyoshi Machii; Soshiro Kuzunuki | |||
| We are developing a computerized desk which we have named InteractiveDESK
[1]. One of the major features of the InteractiveDESK is reality awareness;
that is, the ability to respond to situational changes in the real world in
order to reduce users' workloads. In this paper, we present a new method, as
an example of reality awareness, for retrieving electronic documents with real
objects such as paper documents or folders. Users of the InteractiveDESK can
retrieve electronic documents by just showing real objects which have links to
the electronic documents. The links are made by the users through interactions
with the InteractiveDESK. The advantage of this method is that the user can
unify the arrangement of electronic documents into the arrangement of real
objects. Keywords: Augmented reality, Computer vision, Interaction technique Note: TechNote | |||
| The Virtual Tricorder: A Uniform Interface for Virtual Reality | | BIBAK | PDF | 39-40 | |
| Matthias M. Wloka; Eliot Greenfield | |||
| We describe a new user-interface metaphor for immersive virtual reality --
the virtual tricorder. The virtual tricorder visually duplicates a
six-degrees-of-freedom input device in the virtual environment. Since we map
the input device to the tricorder one-to-one at all times, the user comes to
identify the two with each other. Thus, the resulting interface is visual as well as tactile,
multipurpose, and based on a tool metaphor. It unifies many existing
interaction techniques for immersive virtual reality. Keywords: Immersive virtual reality, User-interface metaphor, 3D user interface,
Tactile feedback Note: TechNote | |||
| A 3D Tracking Experiment on Latency and its Compensation Methods in Virtual Environments | | BIBAK | PDF | 41-49 | |
| Jiann-Rong Wu; Ming Ouhyoung | |||
| In this paper, we describe an experiment on latency and its compensation
methods in a virtual reality application using an HMD and a 3D
head tracker. Our purpose is to make a comparison both in the simulation and
in the real task among four tracker prediction methods: the Grey system theory
based prediction proposed in 1994, Kalman filtering, which has been well known
and widely used since 1991, simple linear extrapolation, and the basic method
without prediction. In our 3D target tracing task that involved eight
subjects, who used their head motion to trace a flying target in random motion,
we have found that when the system latency is 120ms, two prediction methods,
Kalman filtering (not inertial-based) and Grey system prediction, are
significantly better than the one without prediction, and the former two
methods perform equally well. Typical motion trajectories of the four methods
in simulation are plotted, and jittering effects are examined. In terms of
jitter at a 120ms prediction length, Kalman filtering exhibited the largest. Keywords: Motion prediction, Latency in HMD, Virtual reality technology | |||
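Of the four methods compared, linear extrapolation is the simplest to state: predict the tracked pose some latency ahead by extending the velocity implied by the last two tracker samples. A minimal sketch (hypothetical names, position only; the experiment's predictors also handle orientation):

```python
def predict_linear(p_prev, p_curr, dt_samples, latency):
    """Linearly extrapolate a tracked 3D position `latency` seconds ahead,
    given two samples (x, y, z) taken `dt_samples` seconds apart."""
    return tuple(c + (c - p) * (latency / dt_samples)
                 for p, c in zip(p_prev, p_curr))
```

For example, with samples one second apart and a two-second prediction horizon, a point moving from x=0 to x=1 is predicted at x=3. At a realistic 120 ms latency the same arithmetic applies, just with smaller intervals.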
| Visual Interfaces for Solids Modeling | | BIBAK | PDF | 51-60 | |
| Cindy Grimm; David Pugmire; Mark Bloomenthal; John Hughes; Elaine Cohen | |||
| This paper explores the use of visual operators for solids modeling. We
focus on designing interfaces for free-form operators such as blends, sweeps,
and deformations, because these operators have a large number of interacting
parameters whose effects are often determined by an underlying
parameterization. In this type of interactive modeling, good solutions to the
design problem have aesthetic as well as engineering components.
Traditionally, interaction with the parameters of these operators has been through text editors, curve editors, or trial-and-error with a slider bar. Parametric values have been estimated from data, but not interactively. These parameters are usually one- or two-dimensional, but the operators themselves are intrinsically three-dimensional in that they are used to model surfaces visualized in 3D. The traditional textual style of interaction is tedious and interposes a level of abstraction between the parameters and the resulting surface. A 3D visual interface has the potential to reduce or eliminate these problems by combining parameters and representing them with a higher-level visual tool. The visual tools we present not only speed up the process of determining good parameter values but also provide visual interactions that are either independent of the particular parameterizations or make explicit the effect of the parameterizations. Additionally, these tools can be manipulated in the same 3D space as the surfaces produced by the operators, supporting quick, interactive exploration of the large design space of these free-form operators. This paper discusses the difficulties in creating a coherent user interface for interactive modeling. To this end we present four principles for designing visual operators, using several free-form visual operators as concrete examples. Keywords: Computer graphics, Computational geometry and object modeling, Curve,
surface, solid, and object representations, Splines, User interfaces, | |||
| SDM: Selective Dynamic Manipulation of Visualizations | | BIBAK | PDF | 61-70 | |
| Mei C. Chuah; Steven F. Roth; Joe Mattis; John Kolojejchick | |||
| In this paper we present a new set of interactive techniques for 2D and 3D
visualizations. This set of techniques is called SDM (Selective Dynamic
Manipulation). Selective, indicating our goal of providing a high degree of
user control in selecting an object set, in selecting interactive techniques
and the properties they affect, and in the degree to which a user action
affects the visualization. Dynamic, indicating that the interactions all occur
in real-time and that interactive animation is used to provide better
contextual information to users in response to an action or operation.
Manipulation, indicating the types of interactions we provide, where users can
directly move objects and transform their appearance to perform different
tasks. While many other approaches only provide interactive techniques in
isolation, SDM supports a suite of techniques which users can combine to solve
a wide variety of problems. Keywords: Interactive technologies, Visualizations, Direct manipulation | |||
| Hands-On Demonstration: Interacting with SpeechSkimmer | | BIBAK | PDF | 71-72 | |
| Barry Arons | |||
| SpeechSkimmer is an interactive system for quickly browsing and finding
information in speech recordings. Skimming speech recordings is much more
difficult than visually scanning images, text, or video because of the slow,
linear, temporal nature of the audio channel. The SpeechSkimmer system uses a
combination of (1) time compression and pause removal, (2) automatically
finding segments that summarize a recording, and (3) interaction techniques, to
enable a speech recording to be heard quickly and at several levels of detail.
SpeechSkimmer was first presented at UIST '93 [1]. Since that time several important features have been added (see [2]). Most notable is the use of a pitch-based emphasis detection algorithm to automatically find topic introductions and summarizing statements from a recording [3, 4]. This demonstration is presented as a hands-on guide, allowing one to explore the SpeechSkimmer user interface. Keywords: Speech skimming, Time compression, Non-speech audio | |||
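Of the techniques SpeechSkimmer combines, pause removal is the easiest to illustrate: drop runs of near-silent samples beyond a short threshold so playback skips dead air while keeping speech rhythm. This is a toy sketch with hypothetical names and a naive amplitude threshold, not the system's actual signal processing:

```python
def remove_pauses(samples, silence_level=0.05, max_pause=3):
    """Shorten runs of near-silent samples to at most `max_pause` samples,
    keeping a short gap so the rhythm of the speech survives."""
    out, silent_run = [], []
    for s in samples:
        if abs(s) < silence_level:
            silent_run.append(s)       # accumulate the pause
        else:
            out.extend(silent_run[:max_pause])  # keep only a short gap
            silent_run = []
            out.append(s)
    out.extend(silent_run[:max_pause])          # trailing pause, if any
    return out
```

Real systems measure energy over windows rather than single samples, but the shape of the computation is the same.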
| Using Information Murals in Visualization Applications | | BIBAK | PDF | 73-74 | |
| Dean F. Jerding; John T. Stasko | |||
| Information visualizations must allow users to browse information spaces and
focus quickly on items of interest. Navigational techniques which utilize some
representation of the entire information space provide context to support more
detailed information views. However, the limited number of pixels on the
screen makes it difficult to completely display large information spaces. The
Information Mural is a two-dimensional, reduced representation of an entire
information space that fits entirely within a display window or screen. The
mural creates a miniature version of the information space using visual
attributes such as grayscale shading, intensity, color, and pixel size, along
with anti-aliased compression techniques. Information murals can be used as
stand-alone visualizations or in global navigational views. Keywords: Information visualization, Navigation techniques, Program visualization,
Data visualization | |||
| Grizzly Bear: A Demonstrational Learning Tool for a User Interface Specification Language | | BIBAK | PDF | 75-76 | |
| Martin R. Frank | |||
| Grizzly Bear is a new demonstrational tool for specifying user interface
behavior. It can handle multiple application windows, dynamic object
instantiation and deletion, changes to any object attribute, and operations on
sets of objects. It enables designers to experiment with rubber-banding,
deletion by dragging to a trashcan and many other interactive techniques. To
the author's best knowledge it is currently the most complete demonstrational
user interface design tool that does not base its inferencing on rule-based
guessing.
There are inherent limitations to the range of user interfaces that can ever be built by demonstration alone. Grizzly Bear is therefore designed to work hand-in-hand with a user interface specification language called the Elements, Events & Transitions model. As designers demonstrate behavior, they can watch Grizzly Bear incrementally build the corresponding textual specification, letting them learn the language on the fly. They can then apply their knowledge by modifying Grizzly Bear's textual inferences, which reduces the need for repetitive demonstrations and provides an escape mechanism for behavior that cannot be demonstrated. Keywords: Rapid prototyping, User interface specification languages, Programming by
demonstration | |||
| Demonstration of a Reading Coach that Listens | | BIBAK | PDF | 77-78 | |
| Jack Mostow; Alexander G. Hauptmann; Steven F. Roth | |||
| Project LISTEN stands for "Literacy Innovation that Speech Technology
ENables." We will demonstrate a prototype automated reading coach that displays
text on a screen, listens to a child read it aloud, and helps where needed. We
have tested successive prototypes of the coach on several dozen second graders.
[1] reports implementation details and evaluation results. Here we summarize
its functionality, the issues it raises in human-computer interaction, and how
it addresses them. We are redesigning the coach based on our experience, and
will demonstrate its successor at UIST '95. Keywords: Speech interfaces for children, Continuous speech recognition, Education,
Children, Non-readers | |||
| Speech for Multimedia Information Retrieval | | BIBAK | PDF | 79-80 | |
| Alexander G. Hauptmann; Michael J. Witbrock; Alexander I. Rudnicky; Stephen Reed | |||
| We describe the Informedia News-on-Demand system. News-on-Demand is an
innovative example of indexing and searching broadcast video and audio material
by text content. The fully-automatic system monitors TV news and allows
selective retrieval of news items based on spoken queries. The user then plays
the appropriate video "paragraph". The system runs on a Pentium PC using
MPEG-I video compression and the Sphinx-II continuous speech recognition system
[6]. Keywords: Video information retrieval, Speech recognition, News-On-Demand, Multimedia
indexing and search, Informedia | |||
| An Experimental Evaluation of Transparent User Interface Tools and Information Content | | BIBAK | PDF | 81-90 | |
| Beverly L. Harrison; Gordon Kurtenbach; Kim J. Vicente | |||
| The central research issue addressed by this paper is how we can design
computer interfaces that better support human attention and better maintain the
fluency of work. To accomplish this we propose to use semi-transparent user
interface objects. This paper reports on an experimental evaluation which
provides both valuable insights into design parameters and suggests a
systematic evaluation methodology. For this study, we used a
variably-transparent tool palette superimposed over different background
content, combining text, wire-frame or line art images, and solid images. The
experiment explores the issue of focused attention and interference, by varying
both visual distinctiveness and levels of transparency. Keywords: Display design, Evaluation, Transparency, User interface design, Interaction
technology, Toolglass | |||
| GLEAN: A Computer-Based Tool for Rapid GOMS Model Usability Evaluation of User Interface Designs | | BIBAK | PDF | 91-100 | |
| David E. Kieras; Scott D. Wood; Kasem Abotel; Anthony Hornof | |||
| Engineering models of human performance permit some aspects of usability of
interface designs to be predicted from an analysis of the task, and thus can,
to some extent, replace expensive user testing. The best developed such
tools are GOMS models, which have been shown to be accurate and effective in
predicting usability of the procedural aspects of interface designs. This
paper describes a computer-based tool, GLEAN, that generates quantitative
predictions from a supplied GOMS model and a set of benchmark tasks. GLEAN is
demonstrated to reproduce the results of a case study of GOMS model application
with considerable time savings over both manual modeling as well as empirical
testing. Keywords: User-interface software and technology, Usability, Usability evaluation,
User models, GOMS models | |||
| AIDE: A Step Toward Metric-Based Interface Development Tools | | BIBAK | PDF | 101-110 | |
| Andrew Sears | |||
| Automating any part of the user interface design and evaluation process can
help reduce development costs. This paper presents a metric-based tool called
AIDE (semi-Automated Interface Designer and Evaluator) which assists designers
in creating and evaluating layouts for a given set of interface controls. AIDE
is an initial attempt to demonstrate the potential of incorporating metrics
into user interface development tools. Analyzing the interfaces produced using
AIDE provides encouraging feedback about the potential of this technique. Keywords: Metrics, Design, Evaluation | |||
| High-Latency, Low-Bandwidth Windowing in the Jupiter Collaboration System | | BIBAK | PDF | 111-120 | |
| David A. Nichols; Pavel Curtis; Michael Dixon; John Lamping | |||
| Jupiter is a multi-user, multimedia virtual world intended to support
long-term remote collaboration. In particular, it supports shared documents,
shared tools, and, optionally, live audio/video communication. Users who
program can, with only moderate effort, create new kinds of shared tools using
a high-level windowing toolkit; the toolkit provides transparent support for
fully-shared widgets by default. This paper describes the low-level
communications facilities used by the implementation of the toolkit to enable
that support.
The state of the Jupiter virtual world, including application code written by users, is stored and (for code) executed in a central server shared by all of the users. This architecture, along with our desire to support multiple client platforms and high-latency networks, led us to a design in which the server and clients communicate in terms of high-level widgets and user events. As in other groupware toolkits, we need a concurrency-control algorithm to maintain common values for all instances of the shared widgets. Our algorithm is derived from a fully distributed, optimistic algorithm developed by Ellis and Gibbs [12]. Jupiter's centralized architecture allows us to substantially simplify their algorithm. This combination of a centralized architecture and optimistic concurrency control gives us both easy serializability of concurrent update streams and fast response to user actions. The algorithm relies on operation transformations to fix up conflicting messages. The best transformations are not always obvious, though, and several conflicting concerns are involved in choosing them. We present our experience with choosing transformations for our widget set, which includes a text editor, a graphical drawing widget, and a number of simpler widgets such as buttons and sliders. Keywords: UIMS, Window toolkits, CSCW, Groupware toolkits, Optimistic concurrency
control | |||
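The "operation transformation" fix-up this abstract refers to can be illustrated with the classic insert-vs-insert case for a shared text widget. This is a minimal sketch, not Jupiter's code: the tuple representation and the boolean tie-break flag are assumptions, but they show why concurrent inserts at the same position need an asymmetric rule so both sites converge:

```python
def transform_insert(op, against, op_wins_ties=False):
    """Transform insert op = (pos, text) so it applies correctly after
    `against` = (a_pos, a_text) has already been applied locally.
    Concurrent inserts at the same position need an asymmetric tie-break
    (here a priority flag, e.g. client vs. server) for convergence."""
    pos, text = op
    a_pos, a_text = against
    if pos < a_pos or (pos == a_pos and op_wins_ties):
        return (pos, text)              # unaffected by the other insert
    return (pos + len(a_text), text)    # shift past the inserted text

def apply_insert(doc, op):
    """Apply an insert operation to a document string."""
    pos, text = op
    return doc[:pos] + text + doc[pos:]
```

Two sites that each apply their own operation first and the transformed remote operation second end with identical documents, which is exactly the serializability property the paper's centralized server exploits.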
| Supporting Distributed, Concurrent, One-Way Constraints in User Interface Applications | | BIBAK | PDF | 121-132 | |
| Krishna A. Bharat; Scott E. Hudson | |||
| This paper describes Doppler, a new, fast algorithm for supporting
concurrent, one-way constraints between objects situated in multiple address
spaces. Because of their declarative nature, convenience, low amortized cost,
and good match to interface tasks, constraints have been used to support a
variety of user-interface activities. Unfortunately, nearly all existing
constraint maintenance algorithms are sequential in nature, and cannot function
effectively in a concurrent or distributed setting. The Doppler algorithm
overcomes these limitations. It is a highly efficient distributed and
concurrent algorithm (based on an efficient sequential algorithm for
incremental, lazy updates). Doppler relies solely on asynchronous message
passing, and does not require shared memory, synchronized clocks, or a global
synchronization mechanism. It supports a high degree of concurrency by
efficiently tracking potential cause and effect relationships between reads and
writes, and allowing all causally independent operations to execute in
parallel. This makes it scalable, and optimizes reads and writes by minimizing
their blocking time. Keywords: Constraint maintenance, Incremental update, Concurrency, CSCW, Distributed
asynchronous algorithms, Causal consistency, Partially ordered time | |||
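The sequential core Doppler builds on, incremental lazy one-way constraint evaluation, can be sketched as a mark-out-of-date/evaluate-on-demand cell. This hypothetical single-address-space sketch omits everything that makes Doppler interesting (message passing, causal tracking across machines) but shows the lazy-update pattern:

```python
class Cell:
    """A value optionally defined by a one-way constraint (formula).
    Writes mark downstream cells out-of-date; reads re-evaluate lazily."""

    def __init__(self, value=None, formula=None, inputs=()):
        self.value, self.formula, self.inputs = value, formula, inputs
        self.dependents = []
        self.stale = formula is not None       # formulas start unevaluated
        for cell in inputs:
            cell.dependents.append(self)

    def set(self, value):
        """Write: store the value and invalidate dependents, nothing more."""
        self.value, self.stale = value, False
        self._mark_stale_downstream()

    def _mark_stale_downstream(self):
        for d in self.dependents:
            if not d.stale:                    # stop at already-stale cells
                d.stale = True
                d._mark_stale_downstream()

    def get(self):
        """Read: recompute from inputs only if something upstream changed."""
        if self.stale:
            self.value = self.formula(*(cell.get() for cell in self.inputs))
            self.stale = False
        return self.value
```

A write is cheap (it only flips stale flags), and repeated reads between writes cost nothing, which is the low amortized cost the abstract attributes to one-way constraints.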
| Migratory Applications | | BIBAK | PDF | 133-142 | |
| Krishna A. Bharat; Luca Cardelli | |||
| We introduce a new genre of user interface applications that can migrate
from one machine to another, taking their user interface and application
contexts with them, and continue from where they left off. Such applications
are not tied to one user or one machine, and can roam freely over the network,
rendering service to a community of users, gathering human input and
interacting with people. We envisage that this will support many new
agent-based collaboration metaphors. The ability to migrate executing programs
has applicability to mobile computing as well. Users can have their
applications travel with them, as they move from one computing environment to
another. We present an elegant programming model for creating migratory
applications and describe an implementation. The biggest strength of our
implementation is that the details of migration are completely hidden from the
application programmer; arbitrary user interface applications can be migrated
by a single "migration" command. We address system issues such as robustness,
persistence and memory usage, and also human factors relating to application
design, the interaction metaphor and safety. Keywords: Application migration, Collaborative work, Interactive agents, Application
checkpointing, Mobile computing, Ubiquitous computing, Safety | |||
| Designing Auditory Interactions for PDAs | | BIBAK | PDF | 143-146 | |
| Debby Hindus; Barry Arons; Lisa Stifelman; Bill Gaver; Elizabeth Mynatt; Maribeth Back | |||
| This panel addresses issues in designing audio-based user interactions for
small, personal computing devices, or PDAs. One issue is the nature of
interacting with an auditory PDA and the interplay of affordances and form
factors. Another issue is how both new and traditional metaphors and
interaction concepts might be applied to auditory PDAs. The utility and design
of nonspeech cues are discussed, as are the aesthetic issues of persona and
narrative in designing sounds. Also discussed are commercially available sound
and speech components and related hardware tradeoffs. Finally, the social
implications of auditory interactions are explored, including privacy, fashion
and novel social interactions. Keywords: Auditory interactions, Speech interfaces, Auditory cues, PDAs, User
interactions, Interface design, Auditory icons, Sound design, Social effects of
technology | |||
| Learning from TV Programs: Application of TV Presentation to a Videoconferencing System | | BIBAK | PDF | 147-154 | |
| Tomoo Inoue; Ken-ichi Okada; Yutaka Matsushita | |||
| In this paper, we propose directing the visual image of a videoconferencing
system. Pictures from current videoconferencing systems are often boring. We
believe that the presentation of pictures on TV and in movies should be studied
to improve videoconferencing. For this purpose, we investigated several debate
programs on TV. We classified all the shots into eight classes, and then
determined the duration of each shot and the transition probabilities among the
classes in order to describe the structure of TV programs. From this, rules to
control pictures have been obtained. After that, we made a two-point
multi-party videoconferencing system that utilizes the rules. The system
includes automated control of changes in camera focus. Keywords: Videoconferencing, Visual interface, Direction, TV, Computer-supported
cooperative work | |||
| Pssst: Side Conversations in the Argo Telecollaboration System | | BIBAK | PDF | 155-156 | |
| Lance Berc; Hania Gajewska; Mark Manasse | |||
| We describe side conversations, a new facility we have added to the Argo
telecollaboration system. Side conversations allow subgroups of teleconference
participants to whisper to each other. The other participants can see who is
whispering to whom, but cannot hear what is being said. Keywords: Collaborative tools, Videoconferencing, Side conversations, Groupware Note: TechNote | |||
| Argohalls: Adding Support for Group Awareness to the Argo Telecollaboration System | | BIBAK | PDF | 157-158 | |
| Hania Gajewska; Mark Manasse; Dave Redell | |||
| Members of geographically distributed work groups often complain of a
feeling of isolation and of not knowing "who is around". Argohalls attempt to
solve this problem by integrating video icons, clustered into groups
representing physical hallways, into the Argo telecollaboration system. Argo
users can "hang out" in hallways in order to keep track of the co-workers on
their projects, and they can roam other hallways to "run into" whoever happens
to be there. Keywords: Collaborative tools, Videoconferencing, Group awareness, Groupware Note: TechNote | |||
| Social Activity Indicators: Interface Components for CSCW Systems | | BIBAK | PDF | 159-168 | |
| Mark S. Ackerman; Brian Starr | |||
| Knowing what social activity is occurring within and through a
Computer-Supported Cooperative Work (CSCW) system is often very useful. This
is especially true for computer-mediated communication systems such as chat and
other synchronous applications. People will attend to these systems more
closely when they know that there is interesting activity on them.
Interface mechanisms for indicating social activity, however, are often ad hoc, if present at all. This paper argues for the importance of displaying social activity and proposes a generalized mechanism for doing so. This social activity indication mechanism is built upon a new CSCW toolkit, the Cafe ConstructionKit, which provides a number of important facilities for making construction of these indicators easy and straightforward. Accordingly, this paper presents both the Cafe ConstructionKit as a CSCW toolkit and a mechanism for creating activity indicators. Keywords: Computer-supported cooperative work, User interfaces, Social activity,
Awareness, Visualization, Human-computer interfaces, Information systems, CSCW | |||
| Effective User Interfaces | | BIB | -- | |
| Jim Morris | |||
| Proximal Sensing: Supporting Context Sensitive Interaction | | BIBA | PDF | 169 | |
| Bill Buxton | |||
| This talk addresses the issue of increasing complexity for the user that
accompanies new functionality. Briefly, we discuss how complexity can, through
appropriate design, be off-loaded to the system -- at least for secondary
commands. Consider photography, for example. The 35 mm SLR of a decade ago
was analogous to MS-DOS. You could do everything in theory, but in practice
you were unlikely to do anything without making an error. When we think of
photography, however, we see that there are only two primary decisions: "what"
and "when", which correspond to the two primary actions: "point" and "click".
By embedding domain-specific knowledge, modern cameras off-load all other
decisions to the computer (a.k.a. camera) with the option of overriding the
defaults. The net result is that the needs of the novice and expert are met
with a single device.
What we do in this presentation is talk about how this type of off-loading can be supported, and why this should be done. We do this by example, drawing mainly on the experiences of the Ontario Telepresence Project. | |||
| A Tool to Support Speech and Non-Speech Audio Feedback Generation in Audio Interfaces | | BIBAK | PDF | 171-179 | |
| Lisa J. Stifelman | |||
| Development of new auditory interfaces requires the integration of
text-to-speech synthesis, digitized audio, and non-speech audio output. This
paper describes a tool for specifying speech and non-speech audio feedback and
its use in the development of a speech interface, Conversational VoiceNotes.
Auditory feedback is specified as a context-free grammar, where the basic
elements in the grammar can be either words or non-speech sounds. The feedback
specification method described here provides the ability to vary the feedback
based on the current state of the system, and is flexible enough to allow
different feedback for different input modalities (e.g., speech, mouse,
buttons). The declarative specification is easily modifiable, supporting an
iterative design process. Keywords: Speech user interfaces, Auditory feedback, Text-to-speech synthesis,
Non-speech audio, Hand-held computers, Speech recognition | |||
| Automatic Generation of Task-Oriented Help | | BIBAK | PDF | 181-187 | |
| S. Pangoli; F. Paterno | |||
| This work presents an approach to the design of the software component of an
Interactive System, which supports the generation of automatic task-oriented
help. Help can easily be generated from the abstract formal specification of
the associated system without any further effort. The architectural
description is obtained in a task-driven way, where tasks are specified by
indicating temporal ordering constraints using the operators of a concurrent
formal notation. The association of user tasks with software interaction objects,
which inherit the constraints of related tasks, provides the information needed
to structure task-oriented help directly. The resulting help is more expressive,
with a consequent improvement in the usability of an Interactive System. Keywords: Automatic help, Task-driven design of architectures, Development process for
interactive systems software, Formal notations | |||
| Some Design Refinements and Principles on the Appearance and Behavior of Marking Menus | | BIBAK | PDF | 189-195 | |
| Mark A. Tapia; Gordon Kurtenbach | |||
| This paper describes some design refinements on marking menus and shows how
these refinements embody interesting and relevant design principles for HCI.
These refinements are based on the design principles of: (1) maintaining visual
context, (2) hiding unnecessary information, and (3) supporting skill
development by graphical feedback. The result is a new graphical
representation and a more effective form of visual feedback and behavior for
marking menus. Keywords: Menu layout, User interface design, Pie menus, Gestures, Marking menus | |||
| Browsing the Web with a Mail/News Reader | | BIBA | PDF | 197-198 | |
| Marc H. Brown | |||
| This TechNote introduces WebCard, an integrated mail/news reader and Web
browser. As a mail/news reader, WebCard is fairly conventional; the innovation
is that Web pages are fully integrated in the mail/news reader. The user
interface is based on folders, where an "item" in a folder can be a mail
message, news article or Web page. When displaying a Web page, users can
follow links, and the new pages will appear as items in the current folder.
Users can copy and move items between folders, forward items, and can also use
folders to organize material on the Web, such as hotlists, query results, and
breadth-first expansions. Note: TechNote | |||
| Multiple-View Approach for Smooth Information Retrieval | | BIBAK | PDF | 199-206 | |
| Toshiyuki Masui; Mitsuru Minakuchi; George R. Borden IV; Kouichi Kashiwagi | |||
| Although various visualization techniques have been proposed for information
retrieval tasks, most of them are based on a single strategy for viewing and
navigating through the information space, and vague knowledge such as a
fragment of the name of the object is not effective for the search. In
contrast, people usually look for things using various vague clues
simultaneously. For example, in a library, people can not only walk through
the shelves to find a book they have in mind, but can also be reminded of
the author's name by viewing the books on a shelf, or check the index cards
for more information. To enable such realistic search strategies, we
developed a multiple-view information retrieval system where data
visualization, keyword search, and category search are integrated with the same
smooth zooming interface, and any vague knowledge about the data can be
utilized to narrow the search space. Users can navigate through the
information space at will, by modifying the search area in each view. Keywords: Information visualization, Information retrieval, Smooth interface, Dynamic
query, Geographic information system | |||
| The Continuous Zoom: A Constrained Fisheye Technique for Viewing and Navigating Large Information Spaces | | BIBAK | PDF | 207-215 | |
| Lyn Bartram; Albert Ho; John Dill; Frank Henigman | |||
| Navigating and viewing large information spaces, such as
hierarchically-organized networks from complex real-time systems, suffers from
the problem of viewing a large space on a small screen. Distorted-view
approaches, such as fisheye techniques, have great potential to reduce these
problems by representing detail within its larger context but introduce new
issues of focus, transition between views and user disorientation from
excessive distortion. We present a fisheye-based method which supports
multiple focus points, enhances continuity through smooth transitions between
views, and maintains location constraints to reduce the user's sense of spatial
disorientation. These are important requirements for the representation and
navigation of networked systems in supervisory control applications. The
method consists of two steps: a global allocation of space to rectangular
sections of the display, based on scale factors, followed by degree-of-interest
adjustments. Previous versions of the algorithm relied solely on relative
scale factors to assign size; we present a new version which allocates space
more efficiently using a dynamically calculated degree of interest. In
addition to the automatic system sizing, manual user control over the amount of
space assigned each area is supported. The amount of detail shown in various
parts of the network is controlled by pruning the hierarchy and presenting
those sections in summary form. Keywords: Graphical user interface, Supervisory control systems, Information space,
Hierarchical network, Information visualization, Fisheye view, Navigation | |||
| 3-Dimensional Pliable Surfaces: For the Effective Presentation of Visual Information | | BIBAK | PDF | 217-226 | |
| M. Sheelagh T. Carpendale; David J. Cowperthwaite; F. David Fracchia | |||
| A fundamental issue in user interface design is the effective use of
available screen space, commonly referred to as the screen real estate problem.
This paper presents a new distortion-based viewing tool for exploring large
information spaces through the use of a three-dimensional pliable surface.
Arbitrarily-shaped regions (foci) on the surface may be selected and pulled
towards or pushed away from the viewer thereby increasing or decreasing the
level of detail contained within each region. Furthermore, multiple foci are
smoothly blended together such that there is no loss of context. The
manipulation and blending of foci is accomplished using a fairly simple
mathematical model based on Gaussian curves. The significance of this approach
is that it utilizes precognitive perceptual cues about the three-dimensional
surface to make the distortions comprehensible, and allows the user to
interactively control the location, shape, and extent of the distortion in very
large graphs or maps. Keywords: Distortion viewing, Screen layout, 3D interactions, Information
visualization, Interface metaphors, Interface design issues | |||