| Approaches to Software Engineering: A Human-Centred Perspective | | BIBAK | Full-Text | 1-5 | |
| Liam J. Bannon | |||
| The field of software engineering has been evolving since its inception in
1968. Arguments as to the exact nature of the field, whether it should be
conceived as a real engineering profession, the role of formal methods, whether
it is as much an art as a science, etc., continue to divide both practitioners
and academics. My purpose here is not to debate these particular topics, but
rather to approach the field from the outside, coming as I do from a long
period of involvement in the human and social side of the computing discipline,
namely, from the fields of Human-Computer Interaction, Computer Supported
Cooperative Work, Participative Design, Interaction Design, and Social
Informatics, more generally. I wish to examine how this "human-centred"
perspective might shed new light on some issues within the SE field, perhaps
opening up topics for further discussion and examination. Keywords: CSCW; human-centred computing; requirements; sociology; software engineering | |||
| The APEX Framework: Prototyping of Ubiquitous Environments Based on Petri Nets | | BIBA | Full-Text | 6-21 | |
| José Luís Silva; Óscar R. Ribeiro; João M. Fernandes; José Creissac Campos; Michael D. Harrison | |||
| The user experience of ubiquitous environments is a determining factor in their success. The characteristics of such systems must be explored as early as possible to anticipate potential user problems, and to reduce the cost of redesign. However, the development of early prototypes to be evaluated in the target environment can be disruptive to the ongoing system and therefore unacceptable. This paper reports on an ongoing effort to explore how model-based rapid prototyping of ubiquitous environments might be used to avoid actual deployment while still enabling users to interact with a representation of the system. The paper describes APEX, a framework that brings together an existing 3D Application Server with CPN Tools. APEX-based prototypes enable users to navigate a virtual world simulation of the envisaged ubiquitous environment. The APEX architecture and the proposed CPN-based modelling approach are described. An example illustrates their use. | |||
| Model-Based Design and Implementation of Interactive Spaces for Information Interaction | | BIBAK | Full-Text | 22-37 | |
| Hans-Christian Jetter; Jens Gerken; Michael Zöllner; Harald Reiterer | |||
| Interactive spaces with multiple networked devices and interactive surfaces
are an effective means to support multi-user collocated collaboration. In these
spaces, surfaces like tablet PCs, tabletops, or display walls can be combined
to allow users to interact naturally with their personal or shared information,
e.g. during presentation, discussion, or annotation. However, designing and
implementing such interactive spaces is a challenging task due to the lack of
appropriate interaction abstractions and the shortcomings of current user
interface toolkits. We believe that these challenges can be addressed by
revisiting model-based design techniques for object-oriented user interfaces
(OOUI). We discuss the potential of OOUIs for the design of interactive spaces
and introduce our own object-oriented design and implementation approach.
Furthermore, we introduce the ZOIL (Zoomable Object-Oriented Information
Landscape) paradigm that we have used as an experimental testbed. While our
approach does not provide automated model-driven procedures to create user
interfaces without human intervention, we illustrate how it provides efficient
support throughout design and implementation. We conclude with the results from
a case study in which we collected empirical data on the utility and ease of
use of our approach. Keywords: Interactive Spaces; Information Interaction; Zoomable User Interfaces;
Model-based Design | |||
| ViSE -- A Virtual Smart Environment for Usability Evaluation | | BIBAK | Full-Text | 38-45 | |
| Stefan Propp; Peter Forbrig | |||
| Within the research field of HCI, task models are widely used for model-based
development of interactive systems. Recently introduced approaches have further
applied task models to model the cooperative behaviour of people interacting in
smart environments. However, there is a lack of usability methods supporting
the needs of evaluations during a model-based development process for smart
environments. Particularly during early stages of development, building a
prototypical environment for user evaluations is resource-intensive. To
overcome these challenges we present a process model and corresponding tool
support. We provide the virtual smart environment ViSE to conduct expert
evaluations and user studies during a user-centred design process, supporting iterative evaluations. Keywords: Model-based Usability Evaluation; Task Models; Smart Environment | |||
| A Domain Specific Language for Contextual Design | | BIBA | Full-Text | 46-61 | |
| Balbir S. Barn; Tony Clark | |||
| This paper examines the role of user-centered design (UCD) approaches to the design and implementation of a mobile social software application supporting student social workers in their workplace. The experience of using a variant of UCD is outlined. The principles and expected norms of UCD raised a number of key lessons. It is proposed that these problems and lessons result from the lack of precision in modeling the outcomes of UCD, which prevents model-driven method integration between UCD approaches. Given this, it is proposed that the Contextual Design method is a good candidate for enhancement with model-driven principles. A subset of the Work model, focusing on the Cultural and Flow models, is described using a domain specific language and a supporting tool built on the MetaEdit+ platform. | |||
| An MDE Approach for User Interface Adaptation to the Context of Use | | BIBAK | Full-Text | 62-78 | |
| Wided Bouchelligua; Adel Mahfoudhi; Lassaad Benammar; Sirine Rebai; Mourad Abed | |||
With the advent of new media and recent means of communication, together with
the progress of networks, the circumstances of software use, as well as the
skills and preferences of the users, vary constantly. The adaptation of the
User Interface (UI) has become a necessity due to the variety of contexts of
use. In this paper, we propose a model-based approach for the generation of
adaptive UIs. To reach this objective, we make use of the parameterized
transformation principle within the framework of Model Driven Engineering (MDE)
for the transformation of the abstract interface into a concrete interface. The
parameter of this transformation plays the role of the context of use. The
paper presents two parts: meta-models for every constituent of the context of
use, and the adaptation rules. Keywords: User Interface; Adaptation; Context of use; Model Driven Engineering;
adaptation rules | |||
| Desktop-to-Mobile Web Adaptation through Customizable Two-Dimensional Semantic Redesign | | BIBAK | Full-Text | 79-94 | |
| Fabio Paternò; Giuseppe Zichittella | |||
| In this paper we present a novel method for desktop-to-mobile adaptation.
The solution also supports end-users in customizing multi-device ubiquitous
user interfaces. In particular, we describe an algorithm and the corresponding
tool support to perform desktop-to-mobile adaptation by exploiting logical user
interface descriptions able to capture semantic information indicating the
purpose of the interface elements. We also compare our solution
with existing tools for similar goals. Keywords: Ubiquitous Applications; Multi-Device Environments; Adaptation | |||
| Extending UsiXML to Support User-Aware Interfaces | | BIBA | Full-Text | 95-110 | |
| Ricardo Tesoriero; Jean Vanderdonckt | |||
| Mobile and portable devices require the definition of new user interfaces (UIs) capable of reducing the level of attention users must devote to operating the applications they run, thereby improving their calmness. To carry out this task, the next generation of UIs should be able to capture information from the context and act accordingly. This work defines an extension to the UsiXML methodology that specifies how information about the user is modeled and used to customize the UI. The extension is defined vertically through the methodology, affecting all of its layers. In the Tasks & Concepts layer, we define the user environment of the application, where roles and individuals are characterized to represent different user situations. In the Abstract UI layer, we relate groups of these individuals to abstract interaction objects; thus, user situations are linked to the abstract model of the UI. In the Concrete UI layer, we specify how information about the user is acquired and how it is related to the concrete components of the UI. This work also presents how to apply the proposed extensions to a case study. Finally, it discusses the advantages of using this approach to model user-aware applications. | |||
| The Secret Lives of Assumptions: Developing and Refining Assumption Personas for Secure System Design | | BIBA | Full-Text | 111-118 | |
| Shamal Faily; Ivan Fléchais | |||
| Personas are useful for obtaining an empirically grounded understanding of a secure system's user population, its contexts of use, and possible vulnerabilities and threats endangering it. Often, however, personas need to be partly derived from assumptions; these may be embedded in a variety of different representations. Assumption Personas have been proposed as boundary objects for articulating assumptions about a user population, but no methods or tools currently exist for developing and refining these within the context of secure and usable design. This paper presents an approach for developing and refining assumption personas before and during the design of secure systems. We present a model for structuring the contribution of assumptions to assumption personas, together with a process for developing assumption personas founded on this model. We also present some preliminary results based on an application of this approach in a recent case study. | |||
| Dazed and Confused Considered Normal: An Approach to Create Interactive Systems for People with Dementia | | BIBA | Full-Text | 119-134 | |
| Nasim Mahmud; Joël Vogt; Kris Luyten; Karin Slegers; Jan Van den Bergh; Karin Coninx | |||
| In Western society, the elderly represent a rapidly growing demographic group. For this group, dementia has become an important cause of dependency on others and causes difficulties with independent living. Typical symptoms of the dementia syndrome are decreased location awareness and difficulties in situating one's activities in time, thus hindering long-term plans and activities. We present our approach to creating an interactive system tailored to the needs of the early phases of the dementia syndrome. Given the increasing literacy with mobile technologies in this group, we propose an approach that exploits mobile technology in combination with the physical and social context to support prolonged independent living. Our system strengthens the involvement of caregivers through the patient's social network. We show that applications for people suffering from dementia can be created by explicitly taking context into account in the design process. Context dependencies that are defined at an early stage in the development process are propagated as part of the runtime behavior of the interactive system. | |||
| Supporting Multimodality in Service-Oriented Model-Based Development Environments | | BIBAK | Full-Text | 135-148 | |
| Marco Manca; Fabio Paternò | |||
| While multimodal interfaces are becoming increasingly used and supported,
their development is still difficult and there is a lack of authoring tools for
this purpose. The goal of this work is to discuss how multimodality can be
specified in model-based languages and to apply such a solution to the
composition of graphical and vocal interactions. In particular, we show how to provide
structured support that aims to identify the most suitable solutions for
modelling multimodality at various levels of detail. This is obtained using,
amongst other techniques, the well-known CARE properties in the context of a
model-based language able to support service-based applications and modern Web
2.0 interactions. The method is supported by an authoring environment, which
provides some specific solutions that can be modified by the designers to
better suit their specific needs, and is able to generate implementations of
multimodal interfaces in Web environments. An example of modelling a multimodal
application and the corresponding, automatically generated, user interfaces is
reported as well. Keywords: Multimodal interfaces; Model-based design; Authoring tools | |||
| RTME: Extension of Role-Task Modeling for the Purpose of Access Control Specification | | BIBAK | Full-Text | 149-157 | |
| Birgit Bomsdorf | |||
| Interactive systems are often developed without taking security concerns
into account. We investigated a combination of both HCI models and access
control specifications to overcome this problem. The motivation of a combined
approach is to narrow the gap between different modeling perspectives and to
provide a coherent mapping of modeling concepts. The general goal is the
systematic introduction of, and tool support for, security concerns in the
model-based development of interactive systems. In this paper we report results of our work
currently concentrating on the early design steps. The focus of this
presentation is on the specification of task and role hierarchies, conflicting
privileges and related tool support. Keywords: Task modeling; Role modeling; Role task assignment; Tool support; Access
control | |||
| Web Applications Usability Testing with Task Model Skeletons | | BIBAK | Full-Text | 158-165 | |
| Ivo Maly; Zdenek Mikovec | |||
| Usability testing is a technique for analyzing the usability problems of
applications, but it requires significant effort to prepare the test and
especially to analyze the data collected during the test. New techniques, such
as the usage of task models, were introduced to improve and speed up the test
analysis. Unfortunately, only a few applications provide a task model.
Therefore, we propose a method and tools for partial reconstruction of the task
list and the task model, called a skeleton. This reconstruction is done from
the usability expert's application walkthroughs. The task model skeleton is
generated automatically, but it should provide information during the usability
data analysis similar to that of a manually created full-scale task model. In
order to evaluate the usage of the task model skeleton we conducted a usability
study with the web e-mail client Roundcube. Results show that the task model
skeleton can be used as a good substitute for the manually created task model
in usability testing when a full-scale task model is not available. Keywords: Usability testing; Task list; Task model; Web applications | |||
| Evaluating Relative Contributions of Various HCI Activities to Usability | | BIBAK | Full-Text | 166-181 | |
| Anirudha Joshi; N. L. Sarda | |||
| Several activities related to human-computer interaction (HCI) design are
described in the literature. However, it is not clear whether each HCI activity
is equally important. We propose a multi-disciplinary framework to organise HCI
work in phases, activities, methods, roles, and deliverables. Using regression
analyses on data from 50 industry projects, we derive weights for the HCI
activities in proportion to the impact they make on usability, and compare
these with the recommended and assigned weights. The scores of 4 HCI activities
(user studies, user interface design, usability evaluation of the user
interface, and development support) have the most impact on the Usability Goals
Achievement Metric (UGAM) and account for 58% of the variation in it. Keywords: HCI activities; design process; weights | |||
| AFFINE for Enforcing Earlier Consideration of NFRs and Human Factors When Building Socio-Technical Systems Following Agile Methodologies | | BIBA | Full-Text | 182-189 | |
| Mohamed Bourimi; Thomas Barth; Joerg M. Haake; Bernd Ueberschär; Dogan Kesdogan | |||
| Nowadays, various user-centered and participatory design methodologies with different degrees of agility are followed when building sophisticated socio-technical systems. Even when applying these methods, non-functional requirements (NFRs) are often considered too late in the development process, and the tension that may arise between users' and developers' needs remains mostly neglected. Furthermore, there is a conceptual lack of guidance and support for efficiently fulfilling NFRs in terms of software architecture in general. This paper introduces the AFFINE framework, which simultaneously addresses these needs with (1) conceptual consideration of NFRs early in the development process, (2) explicit balancing of end-users' with developers' needs, and (3) a reference architecture providing support for NFRs. Constitutive requirements for AFFINE were gathered based on experiences from various projects on designing and implementing groupware systems. | |||
| Understanding Formal Description of Pitch-Based Input | | BIBAK | Full-Text | 190-197 | |
| Ondřej Poláček; Zdeněk Míkovec | |||
| Pitch-based input (humming, whistling, singing) in the acoustic modality has
already been studied in several projects. There is also a formal description of
pitch-based input which can be used by designers to define user control of an
application. However, as we discuss in this paper, the formal description can
contain semantic errors. The aim of this paper is to validate the formal
description with designers. We present a tool that is capable of visualizing
vocal commands and detecting semantic errors automatically. We have conducted a
user study that brings preliminary results on designers' comprehension of the
formal description and their ability to identify and remove syntactic errors. Keywords: Non-verbal Vocal Interaction; Vocal Gesture; Formal Description; User Study | |||
| Application Composition Driven by UI Composition | | BIBAK | Full-Text | 198-205 | |
| Christian Brel; Philippe Renevier-Gonin; Audrey Occello; Anne-Marie Déry-Pinna; Catherine Faron-Zucker; Michel Riveill | |||
| With the proliferation of specialized applications, the need for application
composition increases. Each application can be described by a pair
of a visible part -- the User Interface (UI) -- and a hidden part -- the tasks
and the Functional Core (FC). Few works address the problem of application
composition by handling both visible and hidden parts at the same time. Our
proposal, described in this paper, is to start from the visible parts of
applications, their UIs, to build a new application while using information
coming from UIs as well as from tasks. We build upon the semantic description
of UIs to help the developer merge parts of former applications. We argue that
this approach, driven by the composition of UIs, helps the user during the
composition process and ensures the preservation of a usable UI for the
resulting application. Keywords: User Interface Composition; Application Composition | |||
| Methods for Efficient Development of Task-Based Applications | | BIBAK | Full-Text | 206-213 | |
| Vaclav Slovacek | |||
| This paper introduces methods for developing task-based applications by
tightly integrating workflows with application logic written in an imperative
programming language, and by automatically completing workflows, especially
with tasks that mediate interaction with users. Developers are then provided
with a completed workflow that they may use for further development. Automatic
completion of workflows should significantly shorten the development process
and eliminate repetitive and error-prone development tasks. Information
extracted from the workflow structure and low-level application logic may then
be used to automatically generate low- to high-fidelity prototype user
interfaces for different devices and contexts. Keywords: Workflow; workflow processing; task modeling; generated user interface | |||
| Towards an Integrated Model for Functional and User Interface Requirements | | BIBAK | Full-Text | 214-221 | |
| Rabeb Mizouni; Daniel Sinnig; Ferhat Khendek | |||
| Despite the widespread adoption of UML as a standard for modeling software
systems, it does not provide adequate support for specifying User Interface
(UI) requirements. It has become a common practice to separately use UML use
cases for specifying functional requirements and task models for modeling UI
requirements. The lack of integration of these two related models is likely to
introduce redundancies and inconsistencies into the software development
process. In this paper, we propose an integrated model, consisting of use case
and task models, for capturing functional and UI requirements. Both artifacts
are used in a complementary manner and are formally related through so-called
Anchors. Anchors are use case steps that require further elaboration with
UI-specific interactions. These interactions are explicitly captured in
associated task models. The formal semantics of the integrated model is given
with finite state automata. Keywords: Functional Requirements; UML Use Cases; User Interface Requirements; Task
Models; Integrated Requirements Model; Finite State Automata | |||