| Collaborating in context: immersive visualisation environments | | BIBAK | Full-Text | 13-16 | |
| Ross Shannon; Aaron Quigley; Paddy Nixon | |||
| As visualisations of large systems grow more complex, larger
collaborative spaces are required so that a team of designers may work together
while visualising their system. This paper describes the outfitting of a room
to turn it into an immersive visualisation environment. The environment
consists of large display areas, onto which are projected high resolution
visualisations of software systems. Specialised hardware allows the environment
to support multiple concurrent users, who are encouraged to collaborate on
team-based tasks and interact with the environment using novel interaction
metaphors. We will describe applications that demonstrate the efficacy of this
new approach to collaboration. Keywords: computer-supported cooperative work, information visualisation, visual
interaction, visual interface design | |||
| In context of business | | BIBA | Full-Text | 17-18 | |
| Joerg Beringer | |||
| Context-awareness is usually focused on understanding the state of the user
derived from system-external sensor information like location, noise,
surrounding devices, etc. This information is used to understand user needs or
temporal constraints to which the system has to adapt.
In business scenarios, context is to a large extent also represented within the system and adds to the understanding of the current situation. With the emergence of smart devices and embedded systems, context will not be just the sum of sensors but will evolve out of the interaction of various smart devices. | |||
| Input interactions and context component based modelisations: differences and similarities | | BIBAK | Full-Text | 19-22 | |
| Diane Lingrand; Michel Riveill | |||
| In recent years, ubiquitous and pervasive computing have emerged and, in
particular, context-aware computing. With mobile devices, the context is
perpetually evolving, far more than with a standard workstation. In such an
environment, software must modify its behavior dynamically. An emerging way of
programming adaptive software is component programming.
The focus of this paper is to review existing component approaches for input devices and for context, especially those that consider component modeling. We aim to determine the similarities and differences between context and input devices in order to subsequently propose a common component model architecture that will help build such intelligent applications. Keywords: context, human computer interaction | |||
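The common component model the abstract above proposes to work towards is not specified in detail there; purely as an illustrative sketch (the class and method names below are assumptions, not the authors' model), one way to treat input devices and context sources uniformly is to wrap both behind the same event-producing component interface:

```python
# Hypothetical sketch: a common component abstraction for input devices and
# context sources, in the spirit of the Lingrand/Riveill abstract.
# All names here are illustrative assumptions, not the paper's actual model.
from abc import ABC, abstractmethod
from typing import Any, Callable, Dict, List


class Component(ABC):
    """A component that emits typed events (input events or context changes)."""

    def __init__(self) -> None:
        self._listeners: List[Callable[[Dict[str, Any]], None]] = []

    def subscribe(self, listener: Callable[[Dict[str, Any]], None]) -> None:
        self._listeners.append(listener)

    def notify(self, event: Dict[str, Any]) -> None:
        for listener in self._listeners:
            listener(event)

    @abstractmethod
    def poll(self) -> None:
        """Query the underlying device or sensor and notify listeners of changes."""


class MouseInput(Component):
    def poll(self) -> None:
        # A real system would read the device driver; here we fake a reading.
        self.notify({"type": "input", "source": "mouse", "x": 10, "y": 20})


class LocationContext(Component):
    def poll(self) -> None:
        # A context source looks structurally identical to an input device:
        # it produces events the application must adapt to.
        self.notify({"type": "context", "source": "gps", "lat": 43.6, "lon": 7.0})


if __name__ == "__main__":
    components: List[Component] = [MouseInput(), LocationContext()]
    for c in components:
        c.subscribe(lambda e: print("event:", e))
        c.poll()
```

The sketch only illustrates the similarity the abstract hints at: both kinds of source can sit behind one event-producing component interface, leaving their differences (timing, granularity, reliability) to be characterised by the paper itself.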
| Intelligent context-sensitive interactions on desktop and the web | | BIBAK | Full-Text | 23-27 | |
| Alan Dix; Tiziana Catarci; Benjamin Habegger; Yannis Ioannidis; Azrina Kamaruddin; Akrivi Katifori; Giorgos Lepouras; Antonella Poggi; Devina Ramduny-Ellis | |||
| In this paper we briefly describe three systems: onCue, a desktop
internet-access toolbar; Snip!t, a web-based bookmarking application; and
ontoPIM, an ontology-based personal task-management system. These embody context issues
to differing degrees, and we use them to exemplify more general issues
concerning the use of contextual information in 'intelligent' interfaces. We
look at issues relating to interaction and 'appropriate intelligence', at
different types of context that arise and at architectural lessons we have
learnt. We also highlight outstanding problems, in particular the need to
computationally describe and communicate context where reasoning and inference
are distributed. Keywords: context, dynamic interaction, human computer interaction, intelligent
interfaces, natural interaction, user experience | |||
| LCARS: the next generation programming context | | BIBAK | Full-Text | 29-31 | |
| Andreas Heil; Iman Moradi; Torben Weis | |||
| In this paper, we present a high-level graphical language to develop
pervasive applications based on a unique interface design. The language
supports a wide range of programming constructs. Its graphical notation is
based on the LCARS design, which is appealing to different target groups, based
on their specific interests and requirements. We show that users can easily
create pervasive applications using an LCARS-based user interface. The first
step is to describe the technical context in which the application will
execute. Based on this technical context, the UI offers a context-specific set
of visual primitives. By composing these visual primitives on the screen, the
user can specify the behavior of the application. Keywords: VRDK, context, model-driven software engineering, robots, ubiquitous
computing, visual programming languages | |||
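The LCARS abstract above describes a workflow in which the user first declares the technical context and the UI then offers a context-specific set of visual primitives to compose. As a rough, non-graphical sketch of that idea only (the device and primitive names are assumptions; the actual VRDK/LCARS notation is graphical), the mapping from technical context to offered primitives might look like this:

```python
# Rough, non-graphical sketch: the declared technical context determines which
# visual primitives the editor offers. Names are assumptions for illustration;
# the paper's LCARS-based notation is graphical, not textual.
from typing import Dict, List

# Map each kind of device in the technical context to the primitives it enables.
PRIMITIVES_BY_DEVICE: Dict[str, List[str]] = {
    "robot": ["move_to", "grab", "release"],
    "light": ["switch_on", "switch_off", "dim"],
    "temperature_sensor": ["on_threshold", "read_value"],
}


def offered_primitives(technical_context: List[str]) -> List[str]:
    """Return the context-specific set of visual primitives for this context."""
    offered: List[str] = []
    for device in technical_context:
        offered.extend(PRIMITIVES_BY_DEVICE.get(device, []))
    return offered


# Example: an application intended to run with a robot and a light available.
print(offered_primitives(["robot", "light"]))
# -> ['move_to', 'grab', 'release', 'switch_on', 'switch_off', 'dim']
```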
| Learning and managing user context in personalized communications services | | BIBAK | Full-Text | 33-36 | |
| Robert Dinoff; Richard Hull; Bharat Kumar; Daniel Lieuwen; Paulo Santos | |||
| A key dimension in personalization of converged (wireless, wireline, and web)
communication services is adapting each service to a user's context, and thus
tailoring the services to the daily lives of individual users. The Intuitive
Network Application (INA) framework being developed at Bell Labs uses both
machine learning techniques and user feedback to determine a user's
profile and preferences. This paper explores how this information can then be
used by the network to automatically infer a user's context and to tailor the
service behavior to the needs of the user in that context. Keywords: context, learning, personalization, preference palettes, preferences,
privacy | |||
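The INA framework itself is not described at implementation level in the abstract above. As a loose, hypothetical sketch of the general idea only (learning which service setting a user prefers in a given context from feedback, then applying it when that context is inferred), one could imagine something like the following; the names and the simple counting scheme are assumptions for illustration, not Bell Labs' system.

```python
# Hypothetical illustration of learning a per-context preference from user
# feedback and applying it later; not the actual INA framework.
from collections import defaultdict
from typing import Dict, List, Tuple


class PreferenceLearner:
    """Scores user feedback (accept/reject) per (context, setting) pair."""

    def __init__(self) -> None:
        self._scores: Dict[Tuple[str, str], int] = defaultdict(int)

    def feedback(self, context: str, setting: str, accepted: bool) -> None:
        self._scores[(context, setting)] += 1 if accepted else -1

    def preferred_setting(self, context: str, candidates: List[str]) -> str:
        # Pick the candidate setting with the best feedback score in this context.
        return max(candidates, key=lambda s: self._scores[(context, s)])


learner = PreferenceLearner()
learner.feedback("in_meeting", "forward_to_voicemail", accepted=True)
learner.feedback("in_meeting", "ring_mobile", accepted=False)
learner.feedback("at_home", "ring_mobile", accepted=True)

# When the network infers the user is in a meeting, tailor call handling:
print(learner.preferred_setting("in_meeting",
                                ["forward_to_voicemail", "ring_mobile"]))
```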
| Model of primary and secondary context | | BIBAK | Full-Text | 37-38 | |
| Erika Reponen; Kristijan Mihalic | |||
| We propose a model of primary and secondary context to help analyse the user
in the context of mobile device use. We clarify the model and discuss how it can be
used to define context and to understand what user needs arise in different
situations. While we have used the model to analyse mobile video communication,
the approach is general enough to be applied to other areas of communication as
well. Keywords: communication, context, mobile phones, privacy, publishing, video phones | |||
| Modelling "user understanding" in simple communication tasks | | BIBAK | Full-Text | 39-43 | |
| Heimo Müller; Fritz Wiesinger | |||
| We present an architectural model for adaptive interfaces based on eye-gaze
patterns and facial expression analysis. In our approach, each basic visual
sign can adapt its appearance and level of detail during the communication
process. Atomic Communication Units (ACUs) -- analogous to graphical output
primitives -- encapsulate the intended denotation, the encoding of the message
and a method for the judgment of the communication goal. We have analyzed
feedback cycles in human-human communication tasks, and propose application
scenarios for ACUs. Keywords: adaptive interfaces, eye-gaze patterns, mental models, visual language | |||
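The abstract above states that each Atomic Communication Unit encapsulates an intended denotation, an encoding of the message, and a method for judging the communication goal. A minimal, hypothetical sketch of such a unit is given below; the adaptation rule and field names are assumptions for illustration, not the authors' implementation.

```python
# Minimal hypothetical sketch of an Atomic Communication Unit (ACU) that adapts
# its level of detail based on feedback; the adaptation rule and names are
# assumptions, not the paper's actual design.
from dataclasses import dataclass


@dataclass
class ACU:
    denotation: str           # the intended meaning to convey
    level_of_detail: int = 1  # current encoding detail (1 = coarse)

    def encode(self) -> str:
        """Render the message at the current level of detail."""
        return f"{self.denotation} (detail level {self.level_of_detail})"

    def judge(self, user_understood: bool) -> None:
        """Judge the communication goal from feedback (e.g. gaze or facial
        expression analysis) and adapt the encoding for the next cycle."""
        if not user_understood and self.level_of_detail < 3:
            self.level_of_detail += 1


acu = ACU(denotation="battery low")
print(acu.encode())              # coarse rendering
acu.judge(user_understood=False)
print(acu.encode())              # more detailed rendering after negative feedback
```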
| Privacy-aware user interfaces within collaborative environments | | BIBAK | Full-Text | 45-48 | |
| Elke Franz; Katja Liesebach; Katrin Borcea-Pfitzmann | |||
| The main focus of this paper is to discuss the representation of contextual
information in advanced user interfaces supporting privacy awareness. In
particular, we consider collaborative environments, which potentially provide
information about users to everybody acting in the system. Users can apply
Privacy-Enhancing Identity Management (PIM) in order to control which
information they disclose to whom in which situation. However, since PIM must
be performed in addition to the actual tasks within the application, it is
questionable whether users will make reasonable use of it. Therefore, a
privacy-aware user interface is an important prerequisite for the broad
acceptance and adequate use of PIM. We discuss which contextual information
should be represented in a collaborative environment and suggest a possible
representation of the selected information. Keywords: collaborative environments, partial identities, privacy-aware user
interface, privacy-enhancing identity management, visualization | |||
| Sticky, smelly, smoky context: experience design in the kitchen | | BIBAK | Full-Text | 49-52 | |
| Lucia Terrenghi | |||
| In this position paper I reflect on the challenges of designing, setting up and
evaluating a user experience in hybrid contexts, i.e., physical and digital ones,
of everyday life. Keywords: computer supported collaborative cooking, evaluation, human computer
interaction, ubiquitous computing, user experience | |||
| Towards a general purpose user interface for service-oriented context-aware applications | | BIBAK | Full-Text | 53-55 | |
| Torben Weis; Martin Saternus; Mirko Knoll; Alexander Brändle; Marco Combetto | |||
| Today, context-aware applications are isolated systems designed for specific
scenarios. There is no way to combine different applications, which has been
common practice with desktop applications for years. For example, the airline knows
when your plane leaves, your PDA knows your GPS position, VirtualEarth knows
how long you need to get to the airport, and another service can order a taxi to your
current position. If these services could be combined, you would be informed
when you must leave for the airport and a taxi would be ordered to your current
position. Thus, in the future we need to federate context data retrieved from
different sources and services on the internet. This imposes several
challenges: (1) We need an architecture that allows us to federate these
services and to communicate with the users. (2) We need tools that allow
programmers to quickly implement and deploy services on the network to generate
a grass-roots movement. (3) We need a general-purpose user interface for such
applications that allows users to deal with context data and interact with
context-aware services. In this paper we sketch our architecture for
service-oriented context-aware applications. Based on this architecture we
develop a general-purpose user interface which is a collage of instant
messenger, roadmap, and web browser. Keywords: context, human computer interaction, ubiquitous computing | |||
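The abstract above argues for federating context data from several independent services behind one architecture. Purely as an illustrative sketch of that idea (the service classes mirror the abstract's airport example, everything else, including the 60-minute buffer rule, is an assumption), a federation layer might query each context source through one interface and merge the answers:

```python
# Illustrative sketch of federating context data from several services, in the
# spirit of the airport example above; all interfaces here are assumptions.
from abc import ABC, abstractmethod
from typing import Any, Dict, List


class ContextService(ABC):
    @abstractmethod
    def query(self, user: str) -> Dict[str, Any]:
        """Return this service's slice of the user's context."""


class FlightService(ContextService):
    def query(self, user: str) -> Dict[str, Any]:
        return {"departure_in_minutes": 180}


class LocationService(ContextService):
    def query(self, user: str) -> Dict[str, Any]:
        return {"travel_time_to_airport_minutes": 45}


class ContextFederation:
    """Merges context data from independent services into one view."""

    def __init__(self, services: List[ContextService]) -> None:
        self._services = services

    def federated_context(self, user: str) -> Dict[str, Any]:
        merged: Dict[str, Any] = {}
        for service in self._services:
            merged.update(service.query(user))
        return merged


federation = ContextFederation([FlightService(), LocationService()])
ctx = federation.federated_context("alice")

# A simple rule on top of the federated context: leave with a 60-minute buffer.
if ctx["departure_in_minutes"] - ctx["travel_time_to_airport_minutes"] <= 60:
    print("Time to leave; ordering a taxi to your current position.")
else:
    print("You still have time before you need to leave.")
```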