| Engineering more natural interactive programming systems | | BIBAK | Full-Text | 1-2 | |
| Brad A. Myers | |||
| We are all familiar with computing systems that are used by developers to
create interactive computing systems for others. This includes the languages,
libraries, and interactive development environments that we use every day. The
Natural Programming Project has been working on tools, techniques and methods
for designing and developing these systems, using methods from the HCI and
Software Engineering fields. We have performed many studies about the barriers
developers face performing their tasks, and people's natural expression of
algorithms for new applications. We have created a wide variety of languages,
tools and techniques that take advantage of this new knowledge. User studies of
these techniques often show a dramatic impact on developer productivity. For
example, we studied novice and expert programmers debugging their code, and
found that they continually ask "Why" and "Why Not" questions, so we
developed the "Whyline" debugging tool, which allows programmers to directly ask
these questions of their programs and get a visualization of the answers. The
Whyline increases productivity by about a factor of two. We studied the
usability of APIs, such as the Java SDK, and discovered some common patterns
that make programmers up to 10 times slower in finding and using the
appropriate methods. This talk will provide an overview of our studies and the
resulting designs as part of the Natural Programming project. Keywords: end-user software engineering | |||
| A responsibility-based pattern language for usability-supporting architectural patterns | | BIBAK | Full-Text | 3-12 | |
| Bonnie E. John; Len Bass; Elspeth Golden; Pia Stoll | |||
| Usability-supporting architectural patterns (USAPs) were developed as a way
to explicitly connect the needs of architecturally-sensitive usability concerns
to the design of software architecture. In laboratory studies, the Cancellation
USAP was shown to significantly improve the quality of architecture designs for
supporting the ability to cancel a long-running command, sparking interest from
a large industrial organization to develop new USAPs and apply them to their
product line architecture design. The challenges of delivering the
architectural information contained in USAPs to practicing software architects
led to the development of a pattern language for USAPs based on software
responsibilities and a web-based tool for evaluating an architecture with
respect to those patterns. Keywords: software architecture, usability | |||
| StateStream: a developer-centric approach towards unifying interaction models and architecture | | BIBAK | Full-Text | 13-22 | |
| Gerwin de Haan; Frits H. Post | |||
| Complex and dynamic interaction behaviors in applications such as Virtual
Reality (VR) systems are difficult to design and develop. Reasons for this
include the complexity and limitations in specification models and their
integration with the underlying architecture, and lack of supporting
development tools. In this paper we present our StateStream approach, which
uses a dynamic programming language to bridge the gap between the behavioral
model descriptions, the underlying VR architecture and customized development
tools. Whereas the dynamic language allows full flexibility, the interaction
model adds explicit structures for interactive behavior. A dual modeling
mechanism is used to capture both discrete and continuous interaction behavior.
The models are described and executed in the dynamic language itself, unifying
the description of interaction, its execution and the connection with external
software components.
We will highlight the main features of StateStream, and illustrate how the tight integration of interaction model and architecture enables a flexible and open-ended development environment. We will demonstrate the use of StateStream in a prototype system for studying and adapting complex 3D interaction techniques for VR. Keywords: 3D interaction, model-driven engineering, python, user interface description
language | |||
| Ontology-based modularization of user interfaces | | BIBAK | Full-Text | 23-28 | |
| Heiko Paulheim | |||
| Modularization is almost the only feasible way of implementing large-scale
applications. For user interfaces, interactions involving more than one module
generate dependencies between modules. In this paper, we present a framework
that uses ontologies for building UIs from independent, loosely coupled
modules. In an example scenario, we show how that framework is used to build an
application for emergency management. Keywords: modularity, ontologies, user interfaces | |||
| Flexible and efficient platform modeling for distributed interactive systems | | BIBAK | Full-Text | 29-34 | |
| Xiao Feng Qiu; T. C. Nicholas Graham | |||
| Distributed interactive systems often rely on platform information, used for
example when migrating a user interface to a small-screen device, or when
opportunistically recruiting available peripherals. To date, there has been
little work on platform modeling for distributed applications. In this paper,
we demonstrate that distributed platform models are well supported by a publish
and subscribe architecture accompanied by a rich filtering language. This
approach allows organic construction of networks with no centralized locus of
control, high scalability and fault-tolerance, and flexible customization to
the needs of heterogeneous device types. Keywords: distributed interactive applications, groupware toolkits, platform model | |||
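The publish-and-subscribe-with-filtering architecture this abstract describes can be sketched in a few lines. The class and field names below are illustrative assumptions, not the authors' actual API, and a plain Python predicate stands in for the paper's rich filtering language.

```python
# Minimal sketch of a publish/subscribe bus for platform descriptions.
# Subscribers attach filter predicates, so each peer sees only the
# device types it cares about. All names here are invented for the example.

class PlatformBus:
    def __init__(self):
        self.subscriptions = []  # list of (predicate, callback) pairs

    def subscribe(self, predicate, callback):
        self.subscriptions.append((predicate, callback))

    def publish(self, platform):
        # Deliver the platform description to every matching subscriber.
        for predicate, callback in self.subscriptions:
            if predicate(platform):
                callback(platform)

bus = PlatformBus()
small_screens = []

# Subscribe to small-screen devices only, e.g. for UI migration.
bus.subscribe(lambda p: p["screen_width"] <= 480,
              lambda p: small_screens.append(p["name"]))

bus.publish({"name": "phone", "screen_width": 320})
bus.publish({"name": "wall display", "screen_width": 3840})

print(small_screens)  # -> ['phone']
```

Filtering at the bus rather than at each peer is what avoids a centralized locus of control: publishers never need to know which subscribers exist.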
| Interaction engineering using the IVY tool | | BIBAK | Full-Text | 35-44 | |
| José C. Campos; Michael D. Harrison | |||
| This paper is concerned with support for the process of usability
engineering. The aim is to use formal techniques to provide a systematic
approach that is more traceable, and because it is systematic, repeatable. As a
result of this systematic process some of the more subjective aspects of the
analysis can be removed. The technique explores exhaustively those features of
a specific design that fail to satisfy a set of properties. It also analyzes
those aspects of the design where it is possible to quantify the cost of use.
The method is illustrated using the example of a medical device. While many
aspects of the approach and its tool support have already been discussed
elsewhere, this paper builds on and contrasts an analysis of the same device
provided by a third party and in so doing enhances the IVY tool. Keywords: formal methods, model-based usability analysis | |||
| Interactive usability instrumentation | | BIBAK | Full-Text | 45-54 | |
| Scott Bateman; Carl Gutwin; Nathaniel Osgood; Gordon McCalla | |||
| Usage data logged from user interactions can be extremely valuable for
evaluating software usability. However, instrumenting software to collect usage
data is a time-intensive task that often requires technical expertise as well
as an understanding of the usability issues to be explored. We have developed a
new technique for software instrumentation that removes the need for
programming. Interactive Usability Instrumentation (IUI) allows usability
evaluators to work directly with a system's interface to specify what
components and what events should be logged. Evaluators are able to create
higher-level abstractions on the events they log and are provided with
real-time feedback on how events are logged. As a proof of the IUI concept, we
have created the UMARA system, an instrumentation system that is enabled by
recent advances in aspect-oriented programming. UMARA allows users to
instrument software without the need for additional coding, and provides tools
for specification, data collection, and data analysis. We report on the use of
UMARA in the instrumentation of two large open-source projects; our experiences
show that IUI can substantially simplify the process of log-based usability
evaluation. Keywords: aspect-oriented programming, instrumentation, software logging, usability | |||
| TnToolkit: a design and analysis tool for ambiguous, QWERTY, and on-screen keypads | | BIBAK | Full-Text | 55-60 | |
| Steven J. Castellucci; I. Scott MacKenzie | |||
| The pervasive use of ambiguous keypads for mobile text entry necessitates
examination of their performance characteristics. This paper presents TnToolkit
-- a self-contained tool to calculate performance measurements for ambiguous
keypads. While TnToolkit's focus is ambiguous keypads, it also works with
QWERTY and on-screen keypads. By default, TnToolkit predicts words-per-minute
performance based on a traditional Fitts' law model, and calculates KSPC, the
average keystrokes required to enter each character of text. Existing modules
are extensible to implement custom metrics. An experiment reveals that using
TnToolkit to gather performance metrics is 69% faster than existing techniques,
without compromising accuracy. Keywords: ambiguous keypads, keystrokes-per-character, performance measurement, text
entry, toolkit, words-per-minute | |||
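The KSPC metric mentioned in this abstract, the frequency-weighted average keystrokes per character over a word corpus, is easy to illustrate. The sketch below is not TnToolkit code; the multi-tap keypad and the tiny two-word corpus are invented for the example.

```python
# Illustrative KSPC computation for a standard multi-tap phone keypad:
# KSPC = (sum of keystrokes * word frequency) / (sum of chars * frequency).

KEYPAD = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
          "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}

# Under multi-tap, a letter costs as many presses as its position on the key.
TAPS = {ch: i + 1 for letters in KEYPAD.values()
        for i, ch in enumerate(letters)}

def kspc(corpus):
    """corpus: iterable of (word, frequency) pairs."""
    keystrokes = sum(sum(TAPS[c] for c in w) * f for w, f in corpus)
    characters = sum(len(w) * f for w, f in corpus)
    return keystrokes / characters

print(round(kspc([("the", 100), ("and", 50)]), 3))  # -> 1.556
```

A value above 1.0 quantifies the overhead of an ambiguous keypad relative to one keystroke per character.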
| A GOMSL analysis of semi-automated data entry | | BIBAK | Full-Text | 61-66 | |
| Craig Haimson; Justin Grossman | |||
| We used GOMSL (Goals, Operators, Methods, and Selection Rules Language) to
perform computational workflow analyses of two different data entry
applications: a fully manual web form client used to update an enterprise-wide
knowledge base (already in operational use), and an alternative prototype
client that uses content extraction to semi-automate data entry. Our goal was
to explore conditions that affect the speed of manual vs. semi-automated data
entry, and to quantify expected differences in relative system efficiency across
these conditions. We developed GOMSL models for major functionality in both
systems and used GLEAN (GOMS Language Evaluation and Analysis) to simulate user
interactions with representative data. Based on the results of these
simulations, we quantified workflow costs, explored how costs vary across
ranges of parameters, and developed overall estimates of relative system
efficiency. Keywords: content extraction, data entry, goms, knowledge base | |||
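GOMSL models and GLEAN simulations are far richer than this, but the underlying idea of estimating task time by summing operator times can be shown with the classic Keystroke-Level Model. The operator values below are the standard published KLM estimates, not the parameters used in the paper's models.

```python
# Toy Keystroke-Level Model estimator: task time is the sum of the
# standard operator times for the operator sequence. Values in seconds,
# taken from the classic KLM literature (illustrative defaults only).

KLM = {
    "K": 0.28,  # keystroke
    "P": 1.10,  # point with mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_time(ops):
    """Estimate task time for an operator sequence, e.g. 'MPKK'."""
    return sum(KLM[op] for op in ops)

# Filling one form field: think, point at the field, type five keys.
print(round(klm_time("MP" + "K" * 5), 2))  # -> 3.85
```

Comparing such sums for a manual versus a semi-automated workflow is the simplest version of the efficiency comparison the paper performs with GLEAN.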
| Bridging the gulf between interaction engineering and human reliability assessment | | BIBAK | Full-Text | 67-68 | |
| Michael D. Harrison | |||
| The analysis and prediction of potential failure in computer based systems
is a particular concern in the development of safety critical systems in
domains such as healthcare, nuclear power and aviation. These industries invest
substantially to provide arguments for external regulators aimed at improving
confidence that a system is reliable. There is an increasing recognition that
human aspects of these systems often underlie their vulnerability to failure.
Human reliability assessment techniques have therefore been a particular focus
for development.
This short tutorial describes and illustrates techniques for human reliability assessment (HRA). These techniques are compared with interaction and usability engineering techniques. HRA techniques are difficult to apply and serious concerns surround their validity. A focus for the tutorial is to discuss whether cross-development between the HRA and EICS communities will be of mutual benefit and in particular whether there are techniques used within the EICS community capable of enriching an argument that a system is acceptably reliable. In the context of this discussion there will be a brief illustration of the role that formal techniques might play. The tutorial will finally introduce the recent resilience engineering agenda that replaces a focus on why a system fails with a focus on why a system is resilient to failure. Keywords: human reliability assessment, interaction engineering | |||
| A toolkit for peer-to-peer distributed user interfaces: concepts, implementation, and applications | | BIBAK | Full-Text | 69-78 | |
| Jérémie Melchior; Donatien Grolaux; Jean Vanderdonckt; Peter Van Roy | |||
| In this paper we present a software toolkit for deploying peer-to-peer
distributed graphical user interfaces across four dimensions: multiple
displays, multiple platforms, multiple operating systems, and multiple users,
either independently or concurrently. This toolkit is based on the concept of
multi-purpose proxy connected to one or many rendering engines in order to
render a graphical user interface in part or whole for any user, any operating
system (Linux, Mac OS X and Windows XP or higher), any computing platform
(ranging from a pocket PC to a wall screen), and/or any display (ranging from
private to public displays). This toolkit is a genuine peer-to-peer solution in
that no computing platform acts as a server or as a client: any user
interface can be distributed across users, systems, and platforms independently
of their location, system constraints, and platform constraints. After defining
the toolkit concepts, its implementation is described, motivated, and
exemplified on two non-form based user interfaces: a distributed office
automation and a distributed interactive game. Keywords: distributed user interfaces, multi-device environments, multi-platform user
interfaces, multi-user user interfaces, peer-to-peer, ubiquitous computing,
user interface toolkit | |||
| An infrastructure for experience centered agile prototyping of ambient intelligence | | BIBAK | Full-Text | 79-84 | |
| José Luís Silva; José C. Campos; Michael D. Harrison | |||
| Ubiquitous computing poses new usability challenges that cut across design
and development. We are particularly interested in "spaces" enhanced with
sensors, public displays and personal devices. How can prototypes be used to
explore the user's mobility and interaction, both explicitly and implicitly, to
access services within these environments? Because of the potential cost of
development and design failure, the characteristics of such systems must be
explored using early versions of the system, which could be disruptive if
used in the target environment. Being able to evaluate these systems early in the process
is crucial to their successful development. This paper reports on an effort to
develop a framework for the rapid prototyping and analysis of ambient
intelligence systems. Keywords: 3D virtual environments, modelling, ubiquitous and context-aware computing | |||
| Support for authoring service front-ends | | BIBAK | Full-Text | 85-90 | |
| Fabio Paternò; Carmen Santoro; Lucio Davide Spano | |||
| The success of service-oriented computing has important implications on how
people develop user interfaces. This paper discusses a method for supporting
the development of interactive applications based on the access to services,
which can be associated with user interface annotations. In particular, we show
how model-based descriptions can be useful for this purpose and the design of
an authoring environment for the development of interactive front-ends of
applications based on Web services. A prototype of the authoring environment is
presented. Keywords: model-based design, user interface composition, web services | |||
| Social network analysis and interactive device design analysis | | BIBAK | Full-Text | 91-100 | |
| Harold Thimbleby; Patrick Oladimeji | |||
| What methods can we use to help understand why users adopt certain use
strategies, and how can we evaluate designs to anticipate and perhaps
positively modify how users are likely to behave? This paper proposes taking
advantage of social network analysis (SNA) to identify features of interaction.
There are plausible reasons why SNA should be relevant to interaction
programming and design, but we also show that SNA has promise, identifies and
explains interesting use phenomena, and can be used effectively on
conventionally-programmed interactive devices. Social network analysis is a
very rich field, practically and theoretically, and many further forms of
application and analysis beyond the promising examples explored in this paper
are possible. Keywords: graph theory, interaction programming, network center, social network
analysis | |||
| A bisimulation-based approach to the analysis of human-computer interaction | | BIBAK | Full-Text | 101-110 | |
| Sébastien Combéfis; Charles Pecheur | |||
| This paper discusses the use of formal methods for analysing human-computer
interaction. We focus on the mode confusion problem that arises whenever the
user thinks that the system is doing something while it is in fact doing
another thing. We consider two kinds of models: the system model describes the
actual behaviour of the system and the mental model represents the user's
knowledge of the system. The user interface is modelled as a subset of system
transitions that the user can control or observe. We formalize a full-control
property which holds when a mental model and associated user interface are
complete enough to allow proper control of the system. This property can be
verified using model-checking techniques on the parallel composition of the two
models. We propose a bisimulation-based equivalence relation on the states of
the system and show that, if the system satisfies a determinism condition with
respect to that equivalence, then minimization modulo that equivalence produces
a minimal mental model that allows full-control of the system. We enrich our
approach to take operating modes into account. We give experimental results
obtained by applying a prototype implementation of the proposed techniques to a
simple model of an air-conditioner. Keywords: bisimulation, formal methods, human-computer interaction (HCI) modelling,
mode confusion | |||
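The minimization step this abstract describes, collapsing states modulo a bisimulation-style equivalence, can be illustrated with a naive partition-refinement sketch. The code below shows the generic textbook technique on a deterministic transition system, not the authors' algorithm, and the toy air-conditioner-like states are invented.

```python
# Naive partition refinement: start with one block of all states, then
# repeatedly split blocks on their (action, target-block) signatures until
# stable. Bisimilar states end up in the same block.

def minimize(states, actions, step):
    """step(state, action) -> successor state, or None if disabled."""
    block_of = {s: 0 for s in states}
    while True:
        # Signature: for each action, which block (if any) we move to.
        sig = {s: tuple((a, block_of.get(step(s, a))) for a in actions)
               for s in states}
        # Re-number blocks so equal (old block, signature) pairs merge.
        keys = {}
        new_block = {}
        for s in states:
            key = (block_of[s], sig[s])
            keys.setdefault(key, len(keys))
            new_block[s] = keys[key]
        if new_block == block_of:
            return block_of  # fixpoint: the partition is stable
        block_of = new_block

# Invented toy example: "off" and "standby" behave identically.
def step(state, action):
    table = {("off", "on"): "cool", ("standby", "on"): "cool",
             ("cool", "off"): "off"}
    return table.get((state, action))

blocks = minimize(["off", "standby", "cool"], ["on", "off"], step)
print(blocks["off"] == blocks["standby"])  # -> True
```

Each block of the final partition becomes one state of the minimal mental model: the user need not distinguish states the system itself cannot distinguish through controllable or observable actions.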
| Task-based design revisited | | BIBAK | Full-Text | 111-116 | |
| Anke Dittmar; Peter Forbrig | |||
| This paper investigates the role of task modeling in model-based design. It
is shown that task models are mainly used to support a specification-driven
design process. Models about current task situations and more specific task
descriptions play a marginal role. Task sketching is proposed to complement
specification-driven modeling activities. The co-development of representations
of current and envisioned practices is encouraged to support a shared design
understanding and creativity. A detailed example illustrates basic ideas. Task
models are represented in HOPS. Advantages of this specification formalism over
conventional task modeling are shown with respect to task sketching. Models can
be combined with illustrations, narratives or other design material at
different levels of abstraction. Animations of enriched models help to explore
the design space. Keywords: development processes for interactive systems, task modeling, task-based
design | |||
| Engineering crowd interaction within smart environments | | BIBAK | Full-Text | 117-122 | |
| Michael D. Harrison; Mieke Massink; Diego Latella | |||
| Smart environments (e.g., airports, hospitals, stadiums, and other physical
spaces using ubiquitous computing to empower many mobile people) provide novel
challenges for usability engineers. Firstly, interaction can be implicit and
therefore unintentional on the part of its users. Secondly, the impact of a
smart environment can influence the collective or crowd behavior of those
immersed within it. These challenges lead to requirements for complementary
analyses which must be combined with the more typical focus on the interaction
between user and device. The paper explores a family of stochastic models aimed
at analyzing these features with a particular focus on crowd interaction. Keywords: dynamic signage systems, formal methods, performance evaluation, process
algebra, usability analysis | |||
| AUGUR: providing context-aware interaction support | | BIBAK | Full-Text | 123-132 | |
| Melanie Hartmann; Daniel Schreiber; Max Mühlhäuser | |||
| As user interfaces become more and more complex and feature laden, usability
tends to decrease. One way to counter this effect is to provide intelligent
support mechanisms. In this paper, we present AUGUR, a system that provides
context-aware interaction support for navigating and entering data in arbitrary
form-based web applications. We further report the results of an initial user
study we performed to evaluate the usability of such context-aware interaction
support.
AUGUR combines several novel approaches: (i) it considers various context sources for providing interaction support, and (ii) it contains a context store that mimics the user's short-term memory to keep track of the context information that currently influences the user's interactions. AUGUR thereby combines the advantages of the three main approaches for supporting the user's interactions, i.e. knowledge-based systems, learning agents, and end-user programming. Keywords: context, intelligent user interfaces, task model | |||
| FRAP: a framework for pervasive games | | BIBAK | Full-Text | 133-142 | |
| Jan-Peter Tutzschke; Olaf Zukunft | |||
| In this paper, we describe the design and realization of FRAP, a framework
for the construction of pervasive games. With FRAP, we focus on context-aware
multi-user chase games that include a strategic component. The game domain is
exemplified by the "capture the flag" metaphor. The playing field supported by
our framework is a combined virtual and physical world in which the player
socially interacts with other players. FRAP provides components for typical
tasks like players moving in the world solving challenges, updating the score
based on the context of the users, and checking the rules of the game. FRAP is
fully implemented on the open handset platform, Google Android. Based on
FRAP, a pervasive game called "King of Location" has been constructed using the
Android-platform. Through this application, a first evaluation of FRAP has been
performed. It shows a significant reduction in the time needed to build a game that
follows the "capture the flag" metaphor. Keywords: context-awareness, distributed systems, framework, pervasive computing,
pervasive games, software architectures | |||
| Adapting ubicomp software and its evaluation | | BIBAK | Full-Text | 143-148 | |
| Malcolm Hall; Marek Bell; Alistair Morrison; Stuart Reeves; Scott Sherwood; Matthew Chalmers | |||
| We describe work in progress on tools and infrastructure to support adaptive
component-based software for mobile devices, in our case Apple iPhones. Our
high-level aim is 'design for appropriation', i.e. system design for uses and
contexts that designers may not be able to fully predict or model in advance.
Logs of users' system operation are streamed back in real time to evaluators'
data visualisation tools, so that they can assess design problems and
opportunities. Evaluators and developers can then create new software
components that are sent to the mobile devices. These components are either
integrated automatically on the fly, or offered as recommendations for users to
accept or reject. By connecting developers, users, and evaluators, we aim to
quicken the pace of iterative design so as to improve the process of creating
and sustaining contextually fitting software. Keywords: adaptive evaluation, contextual software, ubiquitous computing | |||
| Plug-and-design: embracing mobile devices as part of the design environment | | BIBAK | Full-Text | 149-154 | |
| Jan Meskens; Kris Luyten; Karin Coninx | |||
| Due to the large number of mobile devices that continue to appear on the
consumer market, mobile user interface design becomes increasingly important.
The major issue with many existing mobile user interface design approaches is
the time and effort that is needed to deploy a user interface design to the
target device. In order to address this issue, we propose the plug-and-design
tool that relies on a continuous multi-device mouse pointer to design user
interfaces directly on the mobile target device. This will shorten iteration
time since designers can continuously test and validate each design action they
take. Using our approach, designers can empirically learn the specialities of a
target device which will help them while creating user interfaces for devices
they are not familiar with. Keywords: GUI builder, design tools, mobile UI design | |||
| The future of design specification and verification of safety critical interactive systems: can our systems be sure (safe, usable, reliable and evolvable)? | | BIBAK | Full-Text | 155-156 | |
| David Navarre; Philippe Palanque | |||
| Designing reliable interactive software is hard, and designing usable
reliable interactive software is even harder. Experience shows that many
interactive systems exhibit recurring characteristics that additionally require
evolvability, assessability and certifiability, especially when safety critical
systems are concerned. This tutorial projects into the future the work we
have done over the last 15 years around a Petri-net-based notation and a CASE
tool supporting it, for addressing such aspects of interactive software
development.
The course covers the roles formal notations can play in the interactive systems' development process: * how they provide complete and unambiguous descriptions of these systems, * how they handle system complexity, * how they can fit with interactive systems development processes (highly iterative), * and how they contribute to the implementation activities. Such elements will be addressed first by providing a historical perspective of formal description techniques in the field of interactive systems and then by focusing on the Interactive Cooperative Objects notation and its CASE tool PetShop. The tutorial will also address the new challenges for formal description techniques for interactive systems in order to address on an equal basis various (generally conflicting) properties such as Safety, Usability, Reliability and Evolvability. The audience will learn through concrete examples the advantages and drawbacks of using formal description techniques for various kinds of interactive systems including WIMP, post-WIMP and multimodal interaction techniques. The examples will be taken from various industrial domains including cockpits, satellite ground segments and Air Traffic Control. Keywords: engineering interactive systems, formal description techniques,
human-computer interaction | |||
| Toward user interface virtualization: legacy applications and innovative interaction systems | | BIBAK | Full-Text | 157-166 | |
| Guillaume Besacier; Frédéric Vernier | |||
| Single-user, desktop-based computer applications are pervasive in our daily
lives and work. The prospect of using these applications with innovative
interaction systems, like multi-touch tabletops, tangible user interfaces,
large displays or public/private displays, would enable large-scale field
studies of these technologies, and has the potential to significantly improve
their usefulness and, in turn, their availability. This paper focuses on the
architectural requirements, design, and implementation of such a technology.
First, we review various software technologies for using a single-user desktop
application with a different model of user inputs and graphical output. We then
present a generic technique for using any closed-source or open-source
application with different input and output devices. In our approach, the
application is separated from the user input and graphical output subsystem.
The core part of the application runs in a system-specific virtual environment.
This virtual environment exposes the same API as the removed standard
subsystems. This eliminates the need to rewrite the "legacy" application and
provides high performance by using the application's native way to communicate
with the system. Keywords: legacy applications, novel interaction systems, toolkit | |||
| Edit, inspect and connect your surroundings: a reference framework for meta-UIs | | BIB | Full-Text | 167-176 | |
| Geert Vanderhulst; Daniel Schreiber; Kris Luyten; Max Mühlhäuser; Karin Coninx | |||
| How usable are operational digital libraries: a usability evaluation of system interactions | | BIBAK | Full-Text | 177-186 | |
| Xiangmin Zhang; Jinjing Liu; Yuelin Li; Ying Zhang | |||
| This paper reports a usability evaluation of three operational digital
libraries (DLs): the ACM DL, the IEEE Computer Society DL, and the IEEE Xplore
DL. An experiment was conducted in a usability lab and 35 participants
completed the assigned tasks. The results demonstrate that all three DLs
exhibit usability problems to varying degrees across measures. Searching in
Xplore by inexperienced users was problematic, and browsing in IEEE CS was
extremely difficult for all users. Interaction design features that caused
these usability problems were identified and discussed. The study implies
there is still considerable room for operational DLs to improve in order to provide more
satisfactory services. Keywords: digital libraries, interaction design, usability testing | |||
| The tradeoff between spatial jitter and latency in pointing tasks | | BIBAK | Full-Text | 187-196 | |
| Andriy Pavlovych; Wolfgang Stuerzlinger | |||
| Interactive computing systems frequently use pointing as an input modality,
while also supporting other forms of input such as alphanumeric, voice,
gesture, and force.
We focus on pointing and investigate the effects of input device latency and spatial jitter on 2D pointing speed and accuracy. First, we characterize the latency and jitter of several common input devices. Then we present an experiment, based on ISO 9241-9, where we systematically explore combinations of latency and jitter on a desktop mouse to measure how these factors affect human performance. The results indicate that, while latency has a stronger effect on human performance compared to low amounts of spatial jitter, jitter dramatically increases the error rate, roughly inversely proportional to the target size. The findings can be used in the design of pointing devices for interactive systems, by providing a guideline for choosing parameters of spatial filtering to compensate for jitter, since stronger filtering typically also increases lag. We also describe target sizes at which error rates start to increase notably, as this is relevant for user interfaces where hand tremor or similar factors play a major role. Keywords: fitts' law, jitter, latency, pointing | |||
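The ISO 9241-9 style throughput computation behind experiments like this one combines effective width (4.133 times the standard deviation of endpoint error along the task axis) with mean movement amplitude and time. A minimal sketch, with made-up trial data:

```python
# Effective throughput per ISO 9241-9 conventions:
#   We  = 4.133 * sd of endpoint error   (effective width)
#   IDe = log2(De / We + 1)              (effective index of difficulty)
#   TP  = IDe / mean movement time       (bits per second)
import math
import statistics

def throughput(amplitudes, deviations, times):
    """amplitudes: movement distances per trial (px);
    deviations: signed endpoint error along the task axis (px);
    times: movement times per trial (s)."""
    we = 4.133 * statistics.stdev(deviations)   # effective width
    de = statistics.mean(amplitudes)            # effective distance
    ide = math.log2(de / we + 1)                # effective ID (bits)
    return ide / statistics.mean(times)         # bits per second

tp = throughput([512, 498, 505, 510],
                [4.0, -6.0, 2.0, -1.0],
                [0.62, 0.71, 0.66, 0.69])
print(round(tp, 2))  # -> 7.26
```

Because jitter widens the endpoint distribution, it inflates We and deflates throughput, which is one way the tradeoff studied above shows up in the metric itself.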
| Input evaluation of an eye-gaze-guided interface: kalman filter vs. velocity threshold eye movement identification | | BIBAK | Full-Text | 197-202 | |
| Do Hyong Koh; Sandeep A. Munikrishne Gowda; Oleg V. Komogortsev | |||
| This paper evaluates the input performance capabilities of Velocity
Threshold (I-VT) and Kalman Filter (I-KF) eye movement detection models when
employed for eye-gaze-guided interface control. I-VT is a common eye movement
identification model employed by the eye tracking community, but it is neither
robust nor capable of handling high levels of noise present in the eye position
data. Previous research implies that use of a Kalman filter reduces the noise
in the eye movement signal and predicts the signal during brief eye movement
failures, but the actual performance of I-KF was never evaluated. We evaluated
the performance of I-VT and I-KF models using guidelines for ISO 9241 Part 9
standard, which is designed for evaluation of non keyboard/mouse input devices
with emphasis on performance, comfort, and effort. Two applications were
implemented for the experiment: 1) an accuracy test, and 2) a photo viewing
application specifically designed for eye-gaze-guided control. Twenty-one
subjects participated in the evaluation of both models completing a series of
tasks. The results indicate that I-KF allowed participants to complete more
tasks with shorter completion times, while providing higher general comfort,
accuracy, and operation speed, and easier target selection than the I-VT model.
We feel that these results are especially important to the engineers of new
assistive technologies and interfaces that employ eye-tracking technology in
their design. Keywords: eye tracker, human computer interaction, kalman filter, pointing device
evaluation | |||
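The I-VT model compared above is simple to sketch: samples whose point-to-point velocity falls below a threshold are labeled fixations, the rest saccades. The threshold and sampling rate below are illustrative assumptions, not the paper's settings.

```python
# Sketch of Velocity-Threshold (I-VT) eye movement identification:
# each gaze sample is classified as fixation or saccade by comparing
# its instantaneous velocity against a fixed threshold. The sampling
# rate and threshold values are illustrative assumptions.
import math

def ivt_classify(gaze, rate_hz=1000.0, threshold_deg_s=100.0):
    """gaze: list of (x_deg, y_deg); returns a 'fix'/'sac' label per sample."""
    labels = ["fix"]  # first sample has no velocity; treat as fixation
    for (x0, y0), (x1, y1) in zip(gaze, gaze[1:]):
        v = math.hypot(x1 - x0, y1 - y0) * rate_hz  # degrees per second
        labels.append("sac" if v >= threshold_deg_s else "fix")
    return labels

# Synthetic trace: steady gaze, one rapid jump, then steady again.
trace = [(0.0, 0.0), (0.01, 0.0), (5.0, 0.0), (5.01, 0.0)]
print(ivt_classify(trace))  # → ['fix', 'fix', 'sac', 'fix']
```

As the abstract notes, such a fixed threshold is fragile under noisy gaze data, which is what motivates the Kalman-filter-based I-KF alternative.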
| An empirical comparison of "Wiimote" gun attachments for pointing tasks | | BIBAK | Full-Text | 203-208 | |
| Victoria McArthur; Steven J. Castellucci; I. Scott MacKenzie | |||
| We evaluated and compared four input methods using the Nintendo Wii Remote
for pointing tasks. The methods used (i) the "A" button on top of the device,
(ii) the "B" button on the bottom of the device, (iii) the Intec Wii Combat
Shooter attachment and (iv) the Nintendo Wii Zapper attachment. Fitts'
throughput for all four input methods was calculated for both button-up and
button-down events. Results indicate that the throughput of the Wii Remote
using the A button is 2.85 bps for button-down events. Performance with the
Intec Wii Combat Shooter attachment was significantly worse than with the other
input methods, likely due to the trigger mechanism. Throughput for button-down
target selection using the B button was highest at 2.93 bps. Keywords: fitts' law, gaming input devices, iso 9241-9, performance evaluation, remote
pointing, Wiimote | |||
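The throughput figures above follow the ISO 9241-9 convention. A minimal sketch of that computation, effective index of difficulty over movement time, using made-up trial data rather than the paper's measurements:

```python
# Sketch of ISO 9241-9 Fitts' throughput: TP = IDe / MT, where the
# effective index of difficulty IDe uses an effective target width
# We = 4.133 * SD of the endpoint errors. Trial data below are invented.
import math
import statistics

def throughput(distance, endpoints, movement_times):
    """distance: nominal movement amplitude; endpoints: signed endpoint
    errors along the movement axis; movement_times: seconds per trial."""
    we = 4.133 * statistics.stdev(endpoints)   # effective width
    ide = math.log2(distance / we + 1.0)       # effective ID in bits
    mt = statistics.mean(movement_times)       # mean movement time (s)
    return ide / mt                            # bits per second

tp = throughput(256.0, [3.0, -5.0, 1.0, 4.0, -2.0],
                [0.9, 1.1, 1.0, 0.95, 1.05])
print(round(tp, 2))  # ≈ 4.15
```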
| Agile methods and interaction design: friend or foe? | | BIBAK | Full-Text | 209-210 | |
| Frank Maurer | |||
| Agile methods and interaction design can be seen as incompatible software
development methodologies: both suggest processes for creating high-quality
software -- one argues for quickly moving to the source code level,
while the other suggests deferring implementation activities until the
design of the software is clearly laid out from the user's perspective. This
apparent discrepancy is surprising given that both approaches put a strong
emphasis on human aspects in software development.
Agile methods focus on creating quality software with high business value but do not explicitly talk about how to ensure that the software is usable -- this is the realm of interaction design. The presentation discusses commonalities and differences between both approaches and points towards integration opportunities: how can agile teams use interaction design approaches to create usable software with high business value. Keywords: agile methods, interaction design | |||
| A formal approach supporting the comparative predictive assessment of the interruption-tolerance of interactive systems | | BIBAK | Full-Text | 211-220 | |
| Philippe Palanque; Marco Winckler; Jean-François Ladry; Maurice H. ter Beek; Giorgio Faconti; Mieke Massink | |||
| This paper presents an approach for investigating in a predictive way
potential disruptive effects of interruptions on task performance in a
multitasking environment. The approach combines previous work in the field of
interruption analysis, formal description techniques for interactive systems
and stochastic processes to support performance analysis of user activities
constrained by the occurrence of interruptions. The approach uses formal
description techniques to provide a precise description of user tasks, and both
system and interruptions behavior. The detailed mechanism by which systems and
interruptions behave is first described using a Petri nets-based formal
description technique called Interactive Cooperative Objects (ICO). The use of
a formal modeling technique for the description of these three components makes
it possible to compare and analyze different interaction techniques. In
particular, it allows us to determine which of the system states are most
affected by the occurrence of interruptions. Once composed together, models
describing the system, user tasks and interruptions behavior are transformed
into PEPA models (i.e. Performance Evaluation Process Algebra) that are
amenable to performance analysis using the PRISM model checker. The approach is
exemplified by a simple example that models two interaction techniques for
manipulating icons in a desktop environment. Keywords: formal description techniques, human computer interaction, interruptions,
model-based approaches, performance evaluation | |||
| Contributing to safety and due diligence in safety-critical interactive systems development by generating and analyzing finite state models | | BIBAK | Full-Text | 221-230 | |
| Harold Thimbleby | |||
| Interaction programming bridges the gap between interaction design and
programming, but it has not yet been related directly to mainstream user
interface development practice. This paper presents UI model discovery tools to
enable existing systems and traditional development processes to benefit from
interaction programming tools and methods; in particular, to enable checking of
safety-critical interaction properties, and to contribute to due diligence
practices in safety-critical interactive systems design. Keywords: discovery tools, interaction programming, model checking | |||
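Checking a safety property over a discovered finite state model, as the abstract above describes, can be sketched as a reachability search. The toy infusion-pump-like state machine and the property below are invented for illustration, not taken from the paper.

```python
# Sketch: exhaustive check of a safety property over a UI model given as
# a finite state machine, via breadth-first reachability. The model and
# the property are invented examples.
from collections import deque

def reachable_violations(initial, transitions, is_unsafe):
    """BFS over the state graph; return unsafe states reachable from initial."""
    seen, queue, bad = {initial}, deque([initial]), []
    while queue:
        state = queue.popleft()
        if is_unsafe(state):
            bad.append(state)
        for action, nxt in transitions.get(state, {}).items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return bad

# States are (mode, dose). Safety property: never infuse at dose 0.
transitions = {
    ("idle", 0): {"up": ("idle", 1), "start": ("infusing", 0)},  # latent flaw
    ("idle", 1): {"start": ("infusing", 1)},
    ("infusing", 1): {"stop": ("idle", 1)},
    ("infusing", 0): {"stop": ("idle", 0)},
}
print(reachable_violations(("idle", 0), transitions,
                           lambda s: s[0] == "infusing" and s[1] == 0))
```

The search surfaces the interaction path a user could actually take into the unsafe state, which is the kind of evidence due-diligence practices require.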
| Usability recommendations in the design of mixed interactive systems | | BIBAK | Full-Text | 231-236 | |
| Syrine Charfi; Emmanuel Dubois; Dominique L. Scapin | |||
| Mixed Interactive Systems (MIS) are systems allowing several interaction
forms resulting from the fusion between physical and digital worlds. Such
systems being relatively new, the design process underlying them is not yet
fully defined, particularly in terms of user-centered design.
The goal of this paper is to present an approach that attempts to identify,
model and integrate available usability knowledge into a user-centered approach
for the design of MIS. The approach consisted of: systematic review of the
literature on MIS; selection and reformulation of usability recommendations into
a common format; classification of the 141 usability recommendations obtained;
and application of the recommendations to the design of a MIS case study
(museum application). Keywords: interaction design, interaction modeling, mixed interactive systems, task
modeling, usability recommendations, user-centered design | |||
| User evaluation of OIDE: a rapid prototyping platform for multimodal interaction | | BIBAK | Full-Text | 237-242 | |
| Marilyn Rose McGee-Lennon; Andrew Ramsay; David McGookin; Philip Gray | |||
| The Open Interface Development Environment (OIDE) was developed as part of
the OpenInterface (OI) platform, an open source framework for the rapid
development of multimodal interactive systems. It allows the graphical
manipulation of components stored in a structured and rich repository of
modalities and interaction techniques. The platform is expected to act as a
central tool for an iterative user-centred design process for multimodal
interactive system design. This paper presents a user study (N=16) designed to
explore how the platform was used in practice by multimodal interaction
designers and developers.
Participants were introduced to the features and functionality of the tool via tutorials and then engaged in an open multimodal design exercise. Participants were expected to explore various multimodal solutions to the design scenario using both traditional prototyping tools and the features available to them via the OIDE prototyping tool. The workshops were recorded and the interaction and dialogue examined to gather feedback on how the OI tool was used or could be used to support or enhance the design stages of prototyping a multimodal application or interface. The results indicate that the OI platform could be a useful tool to support the early design stages during multimodal interaction design. The tool appeared to promote thinking about and using different modalities. The teams varied in size and composition and this appears to have an effect on how the teams approached the task and exploited the OI prototyping tool. We will offer some guidelines as to how open, rapid prototyping tools such as OIDE can be improved to better support multimodal interaction design. Keywords: interaction design, multimodal interaction, open interface, rapid
prototyping, user evaluations | |||
| Context-aware and mobile interactive systems: the future of user interfaces plasticity | | BIBAK | Full-Text | 243-244 | |
| Gaëlle Calvary; Alexandre Demeure | |||
| Mobility and integration of systems into the physical environment are key
challenges for computer science. In particular, User Interfaces (UI) must
accommodate variations in the context of use while preserving human-centered
properties. We call this capacity UI plasticity. This tutorial begins by
reviewing ideas from the last decade concerning the plasticity of user
interfaces. From this starting point, it develops key ideas and perspectives
for the near future. These are illustrated with a demo of a tool for
prototyping plastic widgets and UIs.
In the near future, there will be a need for elaborating a theory of adaptation to predict and explain the difficulties that users encounter when adaptation occurs. Secondly, in order to go beyond simplistic UI adaptation, there will be a need to bring together advances in several research areas including HCI (to support multimodality), Software Engineering (in particular, Model-Driven Engineering, Aspect Oriented Programming, as well as components and services, to cover both design time and run time adaptation), as well as Artificial Intelligence (to support situated information and planning). Indeed, in most current research, the user's task model is assumed as given and is used as the starting point for generating UIs on the fly. In addition, the functional core is considered to be stable rather than compliant with opportunistic discovery of services. In the coming years, we will need to confront challenges that go beyond HCI: (1) incompleteness and uncertainty of the system perception of both the context of use and of the appropriateness of the adapted UI; (2) combinatorial explosion when composing a UI for sustaining emergent user goals. Finally, we will need to develop environments (or studios) for UI Plasticity to integrate partial advances, to make the theory operational and to ease designers' and developers' tasks. Keywords: context-aware user interface, plastic user interface, user interface
adaptation | |||
| An open source workbench for prototyping multimodal interactions based on off-the-shelf heterogeneous components | | BIBAK | Full-Text | 245-254 | |
| Jean-Yves Lionel Lawson; Ahmad-Amr Al-Akkad; Jean Vanderdonckt; Benoit Macq | |||
| In this paper we present an extensible software workbench for supporting the
effective and dynamic prototyping of multimodal interactive systems. We
hypothesize the construction of such applications to be based on the assembly
of several components, namely various and sometimes interchangeable modalities
at the input, fusion-fission components, and also several modalities at the
output. Successful realization of advanced interactions can benefit from early
prototyping and the iterative implementation of design requires the easy
integration, combination, replacement, or upgrade of components. We have
designed and implemented a thin integration platform able to manage these key
elements, and thus provide the research community with a tool to bridge the gap
in current support for multimodal application implementation. The platform is
included within a workbench offering visual editors, non-intrusive tools,
components and techniques to assemble various modalities provided in different
implementation technologies, while keeping a high level of performance of the
integrated system. Keywords: component-based architecture, multimodal interfaces, multimodal software
architecture, prototyping, reusable software component | |||
| Personalizing graphical user interfaces on flexible widget layout | | BIBAK | Full-Text | 255-264 | |
| Takuto Yanagida; Hidetoshi Nonaka; Masahito Kurihara | |||
| The authors propose a method for personalizing the flexible widget layout
(FWL) by adjusting the desirability of widgets with a pairwise comparison
method, and present an implementation demonstrating that it works. Personalization
of graphical user interfaces (GUIs) is important from a perspective of
usability, and it is a challenge in the field of model-based user interface
designs. The FWL is a model- and optimization-based layout framework of GUIs
offering the possibility of personalization, but this has not yet been realized
with any concrete method. In this paper, the authors implement a method for
personalization as a dialog box and incorporate it into the existing system of
the FWL; thus, users can personalize layouts generated by the FWL system at
run-time. Keywords: adaptive user interfaces, flexible widget layouts, fuzzy constraint
satisfaction problems, optimization, personalization of graphical user
interfaces | |||
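The pairwise-comparison step in the abstract above can be sketched with the standard geometric-mean method for deriving priority weights; the comparison matrix below is an invented example, not data from the FWL system.

```python
# Sketch: derive desirability weights from a pairwise comparison matrix
# using the geometric-mean (AHP-style) method. Entry m[i][j] states how
# much more desirable widget i is than widget j. The matrix is invented.
import math

def desirability_weights(m):
    """Return normalized weights, one per widget, from a reciprocal matrix."""
    gms = [math.prod(row) ** (1.0 / len(row)) for row in m]
    total = sum(gms)
    return [g / total for g in gms]

# Three widgets: the first is preferred 2x over the second, 4x over the third.
m = [[1.0, 2.0, 4.0],
     [0.5, 1.0, 2.0],
     [0.25, 0.5, 1.0]]
print([round(w, 3) for w in desirability_weights(m)])  # → [0.571, 0.286, 0.143]
```

Such weights could then feed an optimization-based layout engine as per-widget desirability scores.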
| GT/SD: performance and simplicity in a groupware toolkit | | BIBAK | Full-Text | 265-274 | |
| Brian de Alwis; Carl Gutwin; Saul Greenberg | |||
| Many tools exist for developing real-time distributed groupware, but most of
these tools focus primarily on the performance of the resulting system, or on
simplifying the development process. There is a need, however, for groupware
that is both easy to build and that performs well on real-world networks. To
better support this combination, we present a new toolkit called GT/SD. It
combines new and existing solutions to address the problems of real-world
network performance without sacrificing the simple programming approach needed
for rapid prototyping. GT/SD builds on the successes of earlier groupware
toolkits and game networking libraries, and implements seven ideas that help
solve problems of network delay, quality of service, rapid development,
flexibility, and testing. Keywords: extensibility, groupware, network programming, toolkits | |||
| Fiia: user-centered development of adaptive groupware systems | | BIBAK | Full-Text | 275-284 | |
| Christopher Wolfe; T. C. Nicholas Graham; W. Greg Phillips; Banani Roy | |||
| Adaptive groupware systems support changes in users' locations, devices,
roles and collaborative structure. Developing such systems is difficult due to
the complex distributed systems programming involved. In this paper, we
introduce Fiia, a novel architectural style for groupware. Fiia is
user-centered, in that it allows easy specification of groupware structured
around users' settings, devices and applications, and where adaptations are
specified at a high level similar to scenarios. The Fiia.Net toolkit
automatically maps Fiia architectures to a wide range of possible distributed
systems, under control of an annotation language. Together, these allow
developers to work at a high level, while retaining control over distribution
choices. Keywords: groupware architecture, groupware development toolkit | |||
| MundoMonkey: customizing interaction with web applications in interactive spaces | | BIBAK | Full-Text | 285-290 | |
| Daniel Schreiber; Melanie Hartmann; Max Mühlhäuser | |||
| We notice an increasing usage of web applications in interactive spaces, a
variant of ubiquitous computing environments. Interactive spaces feature a
large and dynamically changing number of devices, e.g., an interactive TV set
in the living room that is used with different input devices or an iPhone that
is dynamically federated to devices in the environment. Web applications need a
better way to exploit the resources in the interactive space beyond the
standard input devices like mouse and keyboard, e.g., a speech recognition
device. This paper presents MundoMonkey, a web browser extension and programming
API for interactive spaces. The API follows the event based programming
paradigm for allowing web applications and end-user scripts to access the
interactive space. Our approach aligns well with the commonly used programming
style for web applications. We used MundoMonkey to customize the interface of
web applications to user preferences and the interactive space at hand. To our
knowledge our approach is the first to address adaptation of the output as well
as processing of input data. With MundoMonkey the customization is performed
transparently to the application developer by the end-user. MundoMonkey is
thus an alternative to model-driven user interface development
approaches. Keywords: dynamic generation/composition of interactive systems, end-user programming
of interactive systems | |||
| Initial evaluation of a bare-hand interaction technique for large displays using a webcam | | BIBAK | Full-Text | 291-296 | |
| Kelvin Cheng; Masahiro Takatsuka | |||
| dTouch is a novel 3D pointing system that allows interaction with large
displays from the use of a single webcam. An initial evaluation demonstrating
the feasibility of our pointing technique is presented. We compared our
prototype with a popular 2D pointing technique, used in the EyeToy game for the
PlayStation console, in a usability study. Results show that the two techniques
are comparable, each with its pros and cons. We concluded that it is possible
to use our technique and a webcam to allow interaction with large displays.
With further development, our technique can serve as a basis for the design of
the next generation of interactive monocular vision systems, owing to the added
flexibility in the user's location. Keywords: hand pointing, large display interaction, monocular computer vision | |||
| A study of GUI representation based on BIFs for enhanced mobile TV | | BIBAK | Full-Text | 297-302 | |
| Hyun-Jeong Yim; Yoon-Chul Choy; Soon-Bum Lim | |||
| Content based interaction is a key factor when creating communications
between viewers and content within enhanced mobile TV. However, it is not easy
to implement enhanced content, including Graphical User Interfaces (GUIs), in
mobile broadcasting environments that are based on Binary Format for Scene
(BIFS). Therefore, we designed and implemented a GUI nodes library that can be
used for content development and show its prototyped contents, with suggested
nodes, on Digital Multimedia Broadcasting (DMB). The results of this study make
it easier to present a GUI in data content and enhance the efficiency of
content development. Keywords: dmb, enhanced data contents, gui, mobile tv, mpeg-4 bifs, node library | |||
| MoLIC designer: towards computational support to hci design with MoLIC | | BIBAK | Full-Text | 303-308 | |
| Ugo Braga Sangiorgi; Simone D. J. Barbosa | |||
| MoLIC, a modeling language for designing interaction as a metaphor of
conversation, was proposed to allow designers to build a blueprint of all
interaction that may take place when the application is used. With the tool
presented in this paper, we aim to address some questions regarding the
use of MoLIC when designing interactive applications; the tool may also help
build critical mass for Semiotic Engineering among HCI practitioners and
researchers. The tool comprises a diagram builder with syntactic verification
based on the MoLIC language. It also allows designers to bind goals with
interaction paths and helps to compare different design decisions. Keywords: interaction design, interaction modeling, semiotic engineering | |||
| Model-based development of synchronous collaborative user interfaces | | BIBAK | Full-Text | 309-312 | |
| George Vellis | |||
| This paper addresses collaborative software development, taking into
account requirements that have emerged from recent progress in network and
computing-device technologies. In light of these advances, and especially of
the sharply growing online virtual communities they enable, software
supporting collaborative practices across multiple environments takes on new
significance. In this context, one important aspect to consider is the design
of user interfaces (UIs) that appropriately support group work. Existing
results offer rich insight into the desired groupware functionality and the
features devised to facilitate it (i.e., replication models, object sharing,
floor control, etc.).
On the other hand, very little is known about their capability to facilitate
generation of multi-user interfaces to groupware applications. With the advent
of model-based user interface engineering, which signifies a move towards
transformation-based approaches to generating the user interface, one challenge
is bridging across these two perspectives. The current work seeks to contribute
to this goal by identifying the type of models needed to capture collaborative
behavior in synchronous multiple user interface settings as well as generating
the collaborative user interface by making use of suitable platform-oriented
architectural models. Keywords: model-based ui development, multi-user interfaces, synchronous groupware,
user interface description languages (uidl) | |||
| Managing non-native widgets in model-based UI engineering | | BIBAK | Full-Text | 313-316 | |
| Dimitrios Kotsalis | |||
| This paper sets out to describe on-going research and development activities
aiming to provide new insights into building advanced user interfaces by
assembling diverse widgets. To this end, we draw upon the relative merits and
drawbacks of the two dominant approaches for developing interactive
applications, namely toolkit programming and model-based user interface
engineering. We motivate the problem by considering a simple example
representative of what toolkit programming may deliver and then contrast its
implications on prevailing model-based UI principles and practice. Our analysis
reveals the key role of widget abstraction in developing specification-based
tools to manage radically different widget sets in a uniform manner. The
ultimate goal of this work is to extend MBUI engineering approaches so as to
enable them to take account of richer interaction vocabularies becoming
increasingly available. Keywords: creativity, model-based UI development, specifications, toolkits | |||
| Helping software architects design for usability | | BIBAK | Full-Text | 317-320 | |
| Elspeth Golden | |||
| In spite of the goodwill and best efforts of software engineers and
usability professionals, systems continue to be built and released with glaring
usability flaws that are costly and difficult to fix after the system has been
designed and/or built. Although user interface (UI) designers, be they
usability or design experts, communicate usability requirements to software
development teams, usability features often fail to be implemented as expected.
If, as seems likely, software developers intend to implement what UI designers
specify and simply do not know how to interpret the architectural ramifications
of usability requirements, then Usability-Supporting Architectural Patterns
(USAPs) will help to bridge the gap between UI designers and software engineers
to produce software architecture solutions that successfully address usability
requirements. USAPs achieve this goal by embedding usability concepts in
templates that can be used procedurally to guide software engineers' thinking
during the complex task of software architecture design. A tool design supports
delivery of USAPs to software architects for use in the early stages of the
design process. Keywords: software architecture, usability | |||
| Semi-automatic multimodal user interface generation | | BIBAK | Full-Text | 321-324 | |
| Dominik Ertl | |||
| Multimodal applications are typically developed together with their user
interfaces, leading to tight coupling. Additionally, human-computer
interaction is often given little consideration. This can result in a poorer
user interface when additional modalities have to be integrated and/or the
application has to be developed for a different device. A promising way of
creating multimodal user interfaces with less effort for applications running
on several devices is semi-automatic generation. This work shows the generation
of multimodal interfaces where a discourse model is transformed to different
automatically rendered modalities. It supports loose coupling of the design of
human-computer interaction and the integration of specific modalities. The
presented communication platform utilizes this transformation process. It
allows for high-level integration of input like speech, hand gesture and a
WIMP-UI. The generation of output is possible with the modalities speech and
GUI. Integration of other input and output modalities is supported as well.
Furthermore, the platform is applicable for several applications as well as
different devices, e.g., PDAs and PCs. Keywords: discourse, model transformation, multimodal ui generation | |||
| Model-driven approach for user interface: business alignment | | BIBAK | Full-Text | 325-328 | |
| Kênia Sousa | |||
| Organizations that adopt Business Process (BP) modeling as a source to
implement enterprise systems struggle to maintain such a link. However, not all
types of organizations are structured for professionals to adequately manage
processes and supporting systems. Even though there are techniques to align
business processes and systems, no solution yet addresses User
Interfaces (UIs). The negative impact of focusing only on functional aspects is
that many changes on processes that affect UIs are not carefully considered.
Therefore, our solution aims at aligning business processes with UIs by
adopting a model-driven approach. Such support is targeted at large
organizations to enable them to manage those links. Keywords: business process modeling, model-driven engineering, requirements
engineering, user-centered design | |||
| Adding flexibility in the model-driven engineering of user interfaces | | BIBAK | Full-Text | 329-332 | |
| Nathalie Aquino | |||
| Model-based user interface (UI) development environments are aimed at
generating one or many UIs from a single model or a family of models.
Model-driven engineering (MDE) of UIs is assumed to be superior to those
environments since it makes the UI design knowledge visible, explicit, and
external, for instance as model-to-model transformations and model-to-code
compilation rules. These transformations and rules are often considered
inflexible, complex to express, and hard to develop by UI designers and
developers who are not necessarily experts in MDE. In order to overcome these
shortcomings, this work introduces the concept of transformation profile that
consists of two definitions: model mappings, which connect source and target
models in a flexible way, and transformation templates, which gather high-level
parameters to apply to transformations. This work applies these concepts in a
general-purpose method for MDE of information systems. Transformation profiles
can be effectively and efficiently used in any circumstances in which
transformation knowledge needs to be modified by non-experts, and flexibility,
modifiability, and customization are required. Keywords: model transformation, model-driven engineering, profile, template, user
interface | |||
| High level data fusion on a multimodal interactive application platform | | BIBAK | Full-Text | 333-336 | |
| Hildeberto Mendonça | |||
| This research aims to propose a multimodal fusion framework for high-level
data integration between two or more modalities. It takes as input low-level
features extracted from different system devices, and analyzes and identifies
intrinsic meanings in these data through dedicated processes running in
parallel. Extracted meanings are mutually compared to identify
complementarities, ambiguities and inconsistencies to better understand the
user intention when interacting with the system. The whole fusion lifecycle
will be described and evaluated in an ambient intelligence scenario, where two
co-workers interact by voice and movement, demonstrating their intentions, and
the system gives advice according to identified needs. Keywords: ambient intelligence, context-sensitive interaction, multimodal fusion,
speech recognition | |||