
Proceedings of the 2004 International Conference on Intelligent User Interfaces

Fullname: International Conference on Intelligent User Interfaces
Editors: John Riedl; Anthony Jameson; Daniel Billsus; Tessa Lau
Location: Funchal, Madeira, Portugal
Dates: 2004-Jan-13 to 2004-Jan-16
Publisher: ACM
Standard No: ACM ISBN 1-58113-815-6; ACM Order Number 608040; ACM DL: Table of Contents; hcibib: IUI04
Papers: 88
Pages: 382
Links: Conference Home Page
  1. Plenary Talks
  2. Intelligent tutoring
  3. User modeling I
  4. Virtual environments & stories
  5. Dialogue
  6. Automated user interface generation
  7. Intelligent assistance
  8. Multi-platform interfaces
  9. Novel interaction modalities I
  10. Novel interaction modalities II
  11. User modeling II
  12. Short Papers
  13. Demonstrations
  14. Workshops

Plenary Talks

Designing intimate experiences BIBFull-Text 2-3
  Sidney Fels
User Interface, or User Interference? BIBFull-Text 4
  Alan Kay

Intelligent tutoring

Building and evaluating an intelligent pedagogical agent to improve the effectiveness of an educational game BIBAFull-Text 6-13
  Cristina Conati; Xiaohong Zhao
Electronic educational games can be highly entertaining, but studies have shown that they do not always trigger learning. To enhance the effectiveness of educational games, we propose intelligent pedagogical agents that can provide individualized instruction integrated with the entertaining nature of the games. In this paper, we describe one such agent, which we have developed for Prime Climb, an educational game on number factorization. The Prime Climb agent relies on a probabilistic student model to generate tailored interventions aimed at helping students learn number factorization through the game. After describing the functioning of the agent and the underlying student model, we report the results of an empirical study that we performed to test the agent's effectiveness.
A collaborative intelligent tutoring system for medical problem-based learning BIBAFull-Text 14-21
  Siriwan Suebnukarn; Peter Haddawy
This paper describes COMET, a collaborative intelligent tutoring system for medical problem-based learning. The system uses Bayesian networks to model individual student knowledge and activity, as well as that of the group. It incorporates a multi-modal interface that integrates text and graphics so as to provide a rich communication channel between the students and the system, as well as among students in the group. Students can sketch directly on medical images, search for medical concepts, and sketch hypotheses on a shared workspace. The prototype system incorporates substantial domain knowledge in the area of head injury diagnosis. A major challenge in building COMET has been to develop algorithms for generating tutoring hints. Tutoring in PBL is particularly challenging since the tutor should provide as little guidance as possible while at the same time not allowing the students to get lost. From studies of PBL sessions at a local medical school, we have identified and implemented eight commonly used hinting strategies. We compared the tutoring hints generated by COMET with those of experienced human tutors. Our results show that COMET's hints agree with the hints of the majority of the human tutors with a high degree of statistical agreement (McNemar test, p = 0.652, Kappa = 0.773).
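The agreement statistics named above (McNemar test and kappa) can be computed from a contingency table of hint judgments. The Python sketch below uses hypothetical counts rather than the COMET study's data, purely to show how Cohen's kappa and an exact McNemar p-value are obtained:
    from math import comb

    def cohens_kappa(table):
        """table[i][j]: number of hints judged category i by the system and j by the human majority."""
        total = sum(sum(row) for row in table)
        observed = sum(table[i][i] for i in range(len(table))) / total
        row_marg = [sum(row) / total for row in table]
        col_marg = [sum(table[i][j] for i in range(len(table))) / total
                    for j in range(len(table))]
        expected = sum(r * c for r, c in zip(row_marg, col_marg))
        return (observed - expected) / (1 - expected)

    def mcnemar_exact_p(b, c):
        """Two-sided exact McNemar test on the two discordant cell counts."""
        n = b + c
        p = sum(comb(n, i) for i in range(min(b, c) + 1)) / 2 ** n
        return min(1.0, 2 * p)

    # Hypothetical 2x2 agree/disagree table, not the study's data.
    table = [[60, 7], [9, 24]]
    print("kappa =", round(cohens_kappa(table), 3))
    print("McNemar p =", round(mcnemar_exact_p(table[0][1], table[1][0]), 3))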

User modeling I

Designing example-critiquing interaction BIBAFull-Text 22-29
  Boi Faltings; Pearl Pu; Marc Torrens; Paolo Viappiani
In many practical scenarios, users are faced with the problem of choosing the most preferred outcome from a large set of possibilities. As people are unable to sift through them manually, decision support systems are often used to automatically find the optimal solution. A crucial requirement for such a system is to have an accurate model of the user's preferences. Studies have shown that people are usually unable to accurately state their preferences up front, but are greatly helped by seeing examples of actual solutions. Thus, several researchers have proposed preference elicitation strategies based on example critiquing. The essential design question in example critiquing is what examples to show users in order to best help them locate their most preferred solution. In this paper, we analyze this question based on two requirements. The first is that it must stimulate the user to express further preferences by showing the range of alternatives available. The second is that the examples that are shown must contain the solution that the user would consider optimal if the currently expressed preference model were complete, so that he or she can select it as a final solution.
Supporting user hypotheses in problem diagnosis BIBAFull-Text 30-37
  Earl J. Wagner; Henry Lieberman
People are performing increasingly complicated actions on the web, such as automated purchases involving multiple sites. Things often go wrong, however, and it can be difficult to diagnose a problem in a complex process. Information must be integrated from multiple sites before relations among processes and data can be visualized and understood. Once the source of a problem has been diagnosed, it can be tedious to explain the process of diagnosis to others, and difficult to review the steps later. We present a web interface agent, Woodstein, that monitors user actions on the web and retrieves related information to assemble an integrated view of an action. It manages user hypotheses during problem diagnosis by capturing users' judgments of the correctness of data and processes. These hypotheses can be shared with others, including customer service representatives, or accessed later. We demonstrate this feature in the context of diagnosing problems on the web, and discuss its broader applicability to system interfaces in general.
What would they think?: a computational model of attitudes BIBAFull-Text 38-45
  Hugo Liu; Pattie Maes
A key to improving at any task is frequent feedback from people whose opinions we care about: our family, friends, mentors, and the experts. However, such input is not usually available from the right people at the time it is needed most, and attaining a deep understanding of someone else's perspective requires immense effort. This paper introduces a technological solution. We present a novel method for automatically modeling a person's attitudes and opinions, and a proactive interface called "What Would They Think?" which offers the just-in-time perspectives of people whose opinions we care about, based on whatever the user happens to be reading or writing. In the application, each person is represented by a "digital persona," generated from an automated analysis of personal texts (e.g. weblogs and papers written by the person being modeled) using natural language processing and commonsense-based textual-affect sensing. In user studies, participants using our application were able to grasp the personalities and opinions of a panel of strangers more quickly and deeply than with either of two baseline methods. We discuss the theoretical and pragmatic implications of this research to intelligent user interfaces.

Virtual environments & stories

Narrative event adaptation in virtual environments BIBAFull-Text 46-53
  Karl E. Steiner; Jay Tomkins
There is a tension between user and author control of narratives in multimedia systems and virtual environments. Reducing the interactivity gives the author more control over when and how users experience key events in a narrative, but may lead to less immersion and engagement. Allowing the user to freely explore the virtual space introduces the risk that important narrative events will never be experienced. One approach to striking a balance between user freedom and author control is adaptation of narrative event presentation (i.e. changing the time, location, or method of presentation of a particular event in order to better communicate with the user). In this paper, we describe the architecture of a system capable of dynamically supporting narrative event adaptation. We also report results from two studies comparing adapted narrative presentation with two other forms of unadapted presentation -- events with author selected views (movie), and events with user selected views (traditional VE). An analysis of user performance and feedback offers support for the hypothesis that adaptation can improve comprehension of narrative events in virtual environments while maintaining a sense of user control.
Qualitative physics in virtual environments BIBAFull-Text 54-61
  Marc Cavazza; Simon Hartley; Jean-Luc Lugrin; Mikael Le Bras
In this paper, we describe a new approach to the creation of virtual environments, which uses qualitative physics to implement object behaviour. We adopted Qualitative Process Theory as a qualitative reasoning formalism, due to its representational properties (e.g., its orientation towards process ontologies and its explicit formulation of process pre-conditions). The system we describe is developed using a game engine and takes advantage of its event-based system to integrate qualitative process simulation in an interactive fashion. We use a virtual kitchen as a test environment. In this virtual world, we have implemented various behaviours: physical object behaviour, complex device behaviour (appliances) and "alternative" (i.e. non-realistic) behaviours, which can all be simulated in real time for the user. After a presentation of the system architecture and its implementation, we discuss example results from the prototype. This approach has potential applications in simulation and training, as well as in entertainment and digital arts. This work also constitutes a test case for the integration of an Artificial Intelligence technique into 3D user interfaces.
Story fountain: intelligent support for story research and exploration BIBAFull-Text 62-69
  Paul Mulholland; Trevor Collins; Zdenek Zdrahal
Increasingly, heritage institutions are making digital artifacts available to the general public and research groups to promote the active exploration of heritage and encourage visits to heritage sites. Stories, such as folklore and first-person accounts, form a useful and engaging heritage resource for this purpose. Story Fountain provides intelligent support for the exploration of digital stories. The suite of functions provided in Story Fountain together supports the investigation of questions and topics that require the accumulation, association or induction of information across the story archive. Story Fountain provides specific support toward this end, such as comparing and contrasting story concepts, presenting story paths between concepts, and mapping stories and events according to properties such as who met whom and who lived where.

Dialogue

A probabilistic approach to reference resolution in multimodal user interfaces BIBAFull-Text 70-77
  Joyce Y. Chai; Pengyu Hong; Michelle X. Zhou
Multimodal user interfaces allow users to interact with computers through multiple modalities, such as speech, gesture, and gaze. To be effective, multimodal user interfaces must correctly identify all objects which users refer to in their inputs. To systematically resolve different types of references, we have developed a probabilistic approach that uses a graph-matching algorithm. Our approach identifies the most probable referents by optimizing the satisfaction of semantic, temporal, and contextual constraints simultaneously. Our preliminary user study results indicate that our approach can successfully resolve a wide variety of referring expressions, ranging from simple to complex and from precise to ambiguous ones.
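As a rough illustration of the idea (not the paper's actual graph-matching formulation), the sketch below scores hypothetical candidate objects for a referring expression by weighting semantic, temporal, and contextual compatibility and picking the most probable referent; all names, features, and weights are invented:
    def score(expr, candidate, weights=(0.5, 0.3, 0.2)):
        semantic = 1.0 if candidate["type"] == expr["type"] else 0.1
        temporal = 1.0 / (1.0 + abs(candidate["t"] - expr["t"]))  # closeness to the speech/gesture time
        contextual = candidate["salience"]                        # e.g. recently gestured at
        w_s, w_t, w_c = weights
        return w_s * semantic + w_t * temporal + w_c * contextual

    expr = {"type": "house", "t": 4.2}        # "this house", spoken at t = 4.2 s
    candidates = [
        {"id": "obj1", "type": "house", "t": 4.0, "salience": 0.9},
        {"id": "obj2", "type": "road",  "t": 4.1, "salience": 0.4},
    ]
    print("most probable referent:", max(candidates, key=lambda c: score(expr, c))["id"])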
Where to look: a study of human-robot engagement BIBAFull-Text 78-84
  Candace L. Sidner; Cory D. Kidd; Christopher Lee; Neal Lesh
This paper reports on a study of human subjects with a robot designed to mimic human conversational gaze behavior in collaborative conversation. The robot and the human subject together performed a demonstration of an invention created at our laboratory; the demonstration lasted 3 to 3.5 minutes. We briefly discuss the robot architecture and then focus the paper on a study of the effects of the robot operating in two different conditions. We offer some conclusions based on the study about the importance of engagement for 3D IUIs. We will present video clips of the subject interactions with the robot at the conference.
Exploiting emotions to disambiguate dialogue acts BIBAFull-Text 85-92
  Wauter Bosma; Elisabeth Andre
This paper describes an attempt to reveal the user's intention from dialogue acts, thereby improving the effectiveness of natural interfaces to pedagogical agents. It focuses on cases where the intention is unclear from the dialogue context or utterance structure, but where the intention may still be identified using the emotional state of the user. The recognition of emotions is based on physiological user input. Our initial user study gave promising results that support our hypothesis that physiological evidence of emotions could be used to disambiguate dialogue acts. This paper presents our approach to the integration of natural language and emotions as well as our first empirical results, which may be used to endow interactive agents with emotional capabilities.

Automated user interface generation

SUPPLE: automatically generating user interfaces BIBAFull-Text 93-100
  Krzysztof Gajos; Daniel S. Weld
In order to give people ubiquitous access to software applications, device controllers, and Internet services, it will be necessary to automatically adapt user interfaces to the computational devices at hand (e.g., cell phones, PDAs, touch panels, etc.). While previous researchers have proposed solutions to this problem, each has limitations. This paper proposes a novel solution based on treating interface adaptation as an optimization problem. When asked to render an interface on a specific device, our SUPPLE system searches for the rendition that meets the device's constraints and minimizes the estimated effort for the user's expected interface actions. We make several contributions: 1) precisely defining the interface rendition problem, 2) demonstrating how user traces can be used to customize interface rendering to a particular user's usage patterns, 3) presenting an efficient interface rendering algorithm, and 4) performing experiments that demonstrate the utility of our approach.
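The optimization view can be illustrated with a toy rendition search: choose one widget per interface element so that the total screen footprint respects the device constraint while the trace-weighted effort of the expected actions is minimized. The element names, pixel costs, effort values, and trace counts below are hypothetical and far simpler than SUPPLE's actual model:
    from itertools import product

    elements = {  # element -> candidate widgets as (widget, pixels, effort per use)
        "volume":  [("slider", 40, 1.0), ("spinner", 15, 2.5)],
        "channel": [("list", 60, 1.2), ("combo", 20, 2.0)],
        "power":   [("button", 10, 1.0)],
    }
    usage = {"volume": 30, "channel": 12, "power": 3}   # counts taken from user traces
    BUDGET = 80                                          # pixels available on the device

    best, best_cost = None, float("inf")
    for choice in product(*elements.values()):
        size = sum(px for _, px, _ in choice)
        if size > BUDGET:
            continue                                     # violates the device constraint
        effort = sum(usage[name] * eff
                     for name, (_, _, eff) in zip(elements, choice))
        if effort < best_cost:
            best, best_cost = choice, effort

    print({name: w for name, (w, _, _) in zip(elements, best)}, "effort:", best_cost)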
Evaluation of visual balance for automated layout BIBAFull-Text 101-108
  Simon Lok; Steven Feiner; Gary Ngai
Layout refers to the process of determining the size and position of the visual objects in an information presentation. We introduce the WeightMap, a bitmap representation of the visual weight of a presentation. In addition, we present algorithms that use WeightMaps to allow an automated layout system to evaluate the effectiveness of its layouts. Our approach is based on the concepts of visual weight and visual balance, which are fundamental to the visual arts. The objects in the layout are each assigned a visual weight, and a WeightMap is created that encodes the visual weight of the layout. Image-processing techniques, including pyramids and edge detection, are then used to efficiently analyze the WeightMap for balance. In addition, derivatives of the sums of the rows and columns are used to generate suggestions for how to improve the layout.
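A minimal sketch of the balance evaluation (omitting the pyramid and edge-detection steps): sum the visual weight per row and column of a WeightMap and compare the weight centroid with the geometric center of the layout; the 6x6 map below is hypothetical:
    weight_map = [
        [0, 0, 0, 0, 0, 0],
        [0, 5, 5, 0, 0, 0],
        [0, 5, 5, 0, 0, 0],
        [0, 0, 0, 0, 2, 2],
        [0, 0, 0, 0, 2, 2],
        [0, 0, 0, 0, 0, 0],
    ]
    rows = [sum(r) for r in weight_map]
    cols = [sum(r[j] for r in weight_map) for j in range(len(weight_map[0]))]
    total = sum(rows)
    centroid_y = sum(i * w for i, w in enumerate(rows)) / total
    centroid_x = sum(j * w for j, w in enumerate(cols)) / total
    center = (len(weight_map) - 1) / 2
    # A large offset suggests moving or re-weighting objects toward the lighter side.
    print("imbalance (x, y):", centroid_x - center, centroid_y - center)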

Intelligent assistance

Sheepdog: learning procedures for technical support BIBAFull-Text 109-116
  Tessa Lau; Lawrence Bergman; Vittorio Castelli; Daniel Oblinger
Technical support procedures are typically very complex. Users often have trouble following printed instructions describing how to perform these procedures, and these instructions are difficult for support personnel to author clearly. Our goal is to learn these procedures by demonstration, watching multiple experts performing the same procedure across different operating conditions, and produce an executable procedure that runs interactively on the user's desktop. Most previous programming by demonstration systems have focused on simple programs with regular structure, such as loops with fixed-length bodies. In contrast, our system induces complex procedure structure by aligning multiple execution traces covering different paths through the procedure. This paper presents a solution to this alignment problem using Input/Output Hidden Markov Models. We describe the results of a user study that examines how users follow printed directions. We present Sheepdog, an implemented system for capturing, learning, and playing back technical support procedures on the Windows desktop. Finally, we empirically evaluate our system using traces gathered from the user study and show that we are able to achieve 73% accuracy on a network configuration task using a procedure trained by non-experts.
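The paper's alignment is done with Input/Output Hidden Markov Models; as a much simpler stand-in that illustrates only the alignment problem itself, the sketch below aligns two hypothetical desktop-action traces with classic edit-distance (Needleman-Wunsch style) alignment:
    def align(a, b, gap=-1, match=2, mismatch=-1):
        m, n = len(a), len(b)
        score = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            score[i][0] = i * gap
        for j in range(1, n + 1):
            score[0][j] = j * gap
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                diag = score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
                score[i][j] = max(diag, score[i-1][j] + gap, score[i][j-1] + gap)
        # Trace back to recover the aligned action pairs.
        pairs, i, j = [], m, n
        while i > 0 and j > 0:
            if score[i][j] == score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch):
                pairs.append((a[i-1], b[j-1])); i, j = i - 1, j - 1
            elif score[i][j] == score[i-1][j] + gap:
                pairs.append((a[i-1], None)); i -= 1
            else:
                pairs.append((None, b[j-1])); j -= 1
        while i > 0:
            pairs.append((a[i-1], None)); i -= 1
        while j > 0:
            pairs.append((None, b[j-1])); j -= 1
        return list(reversed(pairs))

    # Hypothetical traces of the same procedure under different conditions.
    t1 = ["open_control_panel", "open_network", "set_dhcp", "click_ok"]
    t2 = ["open_control_panel", "open_network", "set_static_ip", "set_dns", "click_ok"]
    print(align(t1, t2))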
What role can adaptive support play in an adaptable system? BIBAFull-Text 117-124
  Andrea Bunt; Cristina Conati; Joanna McGrenere
As computer applications become larger with every new version, there is a growing need to provide some way for users to manage the interface complexity. There are three different potential solutions to this problem: 1) an adaptable interface that allows users to customize the application to suit their needs; 2) an adaptive interface that performs the adaptation for the users; or 3) a combination of the adaptive and adaptable solutions, an approach that would be suitable in situations where users are not customizing effectively on their own. In this paper we examine what it means for users to engage in effective customization of a menu-based graphical user interface. We examine one aspect of effective customization, which is how characteristics of the users' tasks and customization behaviour affect their performance on those tasks. We do so by using a process model simulation based on cognitive modelling that generates quantitative predictions of user performance. Our results show that users can engage in customization behaviours that vary in efficiency. We use these results to suggest how adaptive support could be added to an adaptable interface to improve the effectiveness of the users' customization.
An intelligent assistant for interactive workflow composition BIBAFull-Text 125-131
  Jihie Kim; Marc Spraragen; Yolanda Gil
Complex applications in many areas, including scientific computations and business-related web services, are created from collections of components to form computational workflows. In many cases end users have requirements and preferences that depend on how the workflow unfolds, and that cannot be specified beforehand. Workflow editors enable users to formulate workflows, but the editors need to be augmented with intelligent assistance in order to help users in several key aspects of the task, namely: 1) keeping track of detailed constraints across selected components and their connections; 2) specifying the workflow flexibly, e.g., top-down, bottom-up, from requirements, or from available data; and 3) taking partial or incomplete descriptions of workflows and understanding the steps needed for their completion. We present an approach that combines knowledge bases (that have rich representations of components) together with planning techniques (that can track the relations and constraints among individual steps). We illustrate the approach with an implemented system called CAT (Composition Analysis Tool) that analyzes workflows and generates error messages and suggestions in order to help users compose complete and consistent workflows.

Multi-platform interfaces

Flexible re-engineering of web sites BIBAFull-Text 132-139
  Laurent Bouillon; Jean Vanderdonckt; Kwok Chieu Chow
Re-engineering transforms a final user interface into a logical representation that is manipulable enough to allow forward engineering to port a UI from one computing platform to another with maximum flexibility and minimal effort. Re-engineering is used to adapt a UI to another context. This adaptation is governed by two main tasks: the adaptation of the code itself to the new computing platform and the redesign of the UI to better suit the new constraints of the target platform (interaction capabilities, screen size,...). To support this process, we have developed a reverse engineering tool that allows a flexible recovery of the presentation model from Web sites, adapting the reverse engineering to the target platforms, and a forward engineering tool that converts this model into any final executable UI, in particular expressed in VRML, WML, ...
Graceful degradation of user interfaces as a design method for multiplatform systems BIBAFull-Text 140-147
  Murielle Florins; Jean Vanderdonckt
This paper introduces and describes the notion of graceful degradation as a method for supporting the design of user interfaces for multiplatform systems when the capabilities of each platform are very different. The approach is based on a set of transformational rules applied to a single user interface designed for the least constrained platform. A major concern of the graceful degradation approach is to guarantee maximal continuity between the platform-specific versions of the user interface. In order to guarantee the continuity property, a priority ordering between rules is proposed. That ordering makes it possible to apply first the rules with the least impact on the continuity of the multiplatform system.
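The priority ordering can be pictured as trying the least-disruptive transformation rules first and stopping as soon as the interface fits the target platform. The rule names, size model, and numbers below are invented for illustration and are not taken from the paper:
    rules = [  # (priority, name, transformation); lower priority = less impact on continuity
        (1, "shrink_images",         lambda ui: {**ui, "image_px": ui["image_px"] // 2}),
        (2, "replace_table_by_list", lambda ui: {**ui, "table": False, "extra_px": 40}),
        (3, "split_into_pages",      lambda ui: {**ui, "pages": ui["pages"] + 1}),
    ]

    def footprint(ui):
        return ui["base_px"] + ui["image_px"] + (120 if ui["table"] else ui.get("extra_px", 0))

    def degrade(ui, max_px):
        for _, name, rule in sorted(rules):
            if footprint(ui) <= max_px:
                break                        # interface already fits the platform
            ui = rule(ui)
            print("applied:", name, "-> footprint", footprint(ui))
        return ui

    desktop_ui = {"base_px": 200, "image_px": 300, "table": True, "pages": 1}
    phone_ui = degrade(dict(desktop_ui), max_px=400)
    print(phone_ui)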
Flexible interface migration BIBAFull-Text 148-155
  Renata Bandelloni; Fabio Paterno
The goal of this work is to provide users immersed in a multi-platform environment with the possibility of interacting with an application while freely moving from one device to another. We describe the solution that we have developed for a service to support platform-aware runtime migration for Web applications. This allows users interacting with an application to change device and continue their interaction from the same point. The service performs the migration of the application taking into account its runtime state and adapting the application interface to the features of the target platforms. The service is optimized for applications developed through a model-based, multiple-level approach. The intelligence of the adaptive interfaces resides in the migration server, which adapts data collected at runtime from their original format to the format best fitting the features of the target platform. We also indicate how it is possible to extend this result in order to support partial migration and synergistic access, by which a part of the user interface is kept on one device during runtime and the remaining part is moved to another with different characteristics.

Novel interaction modalities I

Robust sketched symbol fragmentation using templates BIBAFull-Text 156-160
  Heloise Hse; Michael Shilman; A. Richard Newton
Analysis of sketched digital ink is often aided by the division of stroke points into perceptually-salient fragments based on geometric features. Fragmentation has many applications in intelligent interfaces for digital ink capture and manipulation, as well as higher-level symbolic and structural analyses. It is our intuitive belief that the most robust fragmentations closely match a user's natural perception of the ink, thus leading to more effective recognition and useful user feedback. We present two optimal fragmentation algorithms that fragment common geometries into a basis set of line segments and elliptical arcs. The first algorithm uses an explicit template in which the order and types of bases are specified. The other only requires the number of fragments of each basis type. For the set of symbols under test, both algorithms achieved 100% fragmentation accuracy rate for symbols with line bases, >99% accuracy for symbols with elliptical bases, and >90% accuracy for symbols with mixed line and elliptical bases.
The connected user interface: realizing a personal situated navigation service BIBAFull-Text 161-168
  Antonio Kruger; Andreas Butz; Christian Muller; Christoph Stahl; Rainer Wasinger; Karl-Ernst Steinberg; Andreas Dirschl
Navigation services can be found in different situations and contexts: while connected to the web through a desktop PC, in cars, and more recently on PDAs while on foot. These services are usually well designed for their specific purpose, but fail to work in other situations. In this paper we present an approach that connects a variety of specialized user interfaces to achieve a personal navigation service spanning different situations. We describe the concepts behind the BPN (BMW Personal Navigator), an entirely implemented system that combines a desktop event and route planner, a car navigation system, and a multi-modal, in- and outdoor pedestrian navigation system for a PDA. Rather than designing for one unified UI, we focus on connecting specialized UIs for desktop, in-car and on-foot use.

Novel interaction modalities II

Wearable virtual tablet: fingertip drawing on a portable plane-object using an active-infrared camera BIBAFull-Text 169-176
  Norimichi Ukita; Masatsugu Kidode
We propose the Wearable Virtual Tablet (WVT), where a user can draw a locus on a common object with a plane surface (e.g., a notebook and a magazine) with a fingertip. Our previous WVT[1], however, could not work on a plane surface with complicated texture patterns: Since our WVT employs an active-infrared camera and the reflected infrared rays vary depending on patterns on a plane surface, it is difficult to estimate the motions of a fingertip and a plane surface from an observed infrared-image. In this paper, we propose a method to detect and track their motions without interference from colored patterns on a plane surface. (1) To find the region of a plane object in the observed image, four edge lines that compose a rectangular object can be easily extracted by employing the properties of an active-infrared camera. (2) To precisely determine the position of a fingertip, we utilize a simple finger model that corresponds to a finger edge independent of its posture. (3) The system can distinguish whether or not a fingertip touches a plane object by analyzing image intensities in the edge region of the fingertip.
Virtual mouse vision based interface BIBAFull-Text 177-183
  Paul Robertson; Robert Laddaga; Max Van Kleek
A vision-based virtual mouse interface is described that utilizes a robotic head, visual tracking of the user's head and hand positions, and recognition of user hand signs to control an intelligent kiosk. The user interface supports, among other things, smooth control of the mouse pointer and buttons using hand signs and movements. The algorithms and architecture of the real-time vision system and robot controller are described.
An intelligent 3D user interface adapting to user control behaviors BIBAFull-Text 184-190
  Tsai-Yen Li; Shu-Wei Hsu
The WALK mode is one of the most common navigation interfaces for 3D virtual environments. However, due to the limited view angle and low frame rate, users are often blocked by obstacles when they navigate in a cluttered virtual scene with such a mode. Intelligent 3D navigation interfaces with assisting mechanisms, such as motion planning methods or virtual force fields, have been proposed in the literature to improve navigation efficiency. Nevertheless, the applicability of these methods is subject to individual discrepancy, and the control parameters of these methods are usually determined by empirical means. In this paper, we propose an intelligent navigation interface with a personalizable assisting mechanism. We have designed two methods, simulation experiment and dynamic adjustment, to find the best control parameters for composing artificial forces for an individual in an off-line and on-line manner, respectively. The simulation experiment method searches for the optimal control parameters for a user in a systematic manner while the dynamic adjustment method makes the assisting mechanism adaptive to user control behaviors as well as environmental variations in real time. Our experiments show that both methods can further improve the navigation efficiency for a wider range of users.
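As an illustration of the kind of assisting mechanism being tuned, the sketch below blends the user's commanded motion with a repulsive virtual force from nearby obstacles, where a single gain parameter plays the role of the control parameters the paper adjusts per user; the scene and values are hypothetical:
    def assisted_step(position, command, obstacles, gain=0.6, influence=2.0):
        fx, fy = command                               # the user's intended motion
        for ox, oy in obstacles:
            dx, dy = position[0] - ox, position[1] - oy
            dist = max((dx * dx + dy * dy) ** 0.5, 1e-6)
            if dist < influence:                       # only nearby obstacles push back
                strength = gain * (influence - dist) / influence
                fx += strength * dx / dist
                fy += strength * dy / dist
        return position[0] + fx, position[1] + fy

    print(assisted_step((0.0, 0.0), (1.0, 0.0), obstacles=[(1.5, 0.2)]))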

User modeling II

Magpie: supporting browsing and navigation on the semantic web BIBAFull-Text 191-197
  John Domingue; Martin Dzbor
We describe several advanced functionalities of Magpie -- a tool that assists users with interpreting web resources. Magpie is an extension to Internet Explorer that automatically creates a semantic layer for web pages using a user-selected ontology. Semantic layers are annotations of a web page, with a set of applicable semantic services attached to the annotated items. We argue that the ability to generate different semantic layers for a web resource is vital to support the interpretation of web pages. Moreover, the assignment of semantic web services to the entities allows users to browse their neighbourhood semantically. At the same time, the Magpie suite offers trigger functionality based on the patterns of an automatically updated semantic log. The benefits of such an approach are illustrated by semantically enriched browsing history management.
Implicit user profiling for on demand relevance feedback BIBAFull-Text 198-205
  Yoshinori Hijikata
In the area of information retrieval and information filtering, relevance feedback is a popular technique that searches for similar documents based on the documents browsed by the user. If the user wants to conduct relevance feedback on demand, meaning that the user wants to see similar documents while reading a document, existing user profiling techniques cannot acquire, in such a short time, the keywords that the user is interested in with high precision. This paper proposes a method for extracting text parts that the user might be interested in from the whole text of a Web page, based on the user's mouse operations in the Web browser. The objectives of this research are to (1) find what kinds of mouse operations represent users' interests, (2) assess the effectiveness of the identified mouse operations in selecting keywords, and (3) compare our method with tf-idf, the most fundamental method used in many user profiling systems. In the user experiment, the keyword-selection precision of our method is about 1.4 times that of tf-idf.
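The tf-idf baseline referred to above can be sketched in a few lines: score each term of the page currently being read by term frequency times inverse document frequency over a document collection, and keep the top-scoring terms as profile keywords. The three toy documents are hypothetical:
    import math
    from collections import Counter

    docs = [
        "intelligent user interfaces adapt to the user",
        "relevance feedback searches similar documents for the user",
        "mouse operations such as text selection reveal user interest",
    ]
    tokenized = [d.split() for d in docs]
    df = Counter(t for doc in tokenized for t in set(doc))   # document frequency
    N = len(tokenized)

    def tfidf(doc_tokens):
        tf = Counter(doc_tokens)
        return {t: tf[t] * math.log(N / df[t]) for t in tf}

    current_page = tokenized[2]
    scores = tfidf(current_page)
    print(sorted(scores, key=scores.get, reverse=True)[:3])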
Evaluating adaptive user profiles for news classification BIBAFull-Text 206-212
  Ricardo Carreira; Jaime M. Crato
Never before have so many information sources been available. Most are accessible on-line and some exist on the Internet alone. However, this large information quantity makes interesting articles hard to find. Modern Personal Digital Assistants (PDAs), mobile phones, and the advent of ubiquitous computing will further complicate matters. Away from the desktop, the time to select important articles might be even harder to find. Strategies to select relevant information are sorely needed. One such strategy is content-based filtering, coupled with User Profiles. Our prototype uses a Bayesian classifier to select articles of interest to a specific user, according to his profile. The articles are extracted from web pages and displayed in a zoomable interface-based browser on a PDA. Interests may change over time, making it important to keep the profile up to date. The system monitors the users' reading behaviors, from which it infers their interest in particular articles and updates the profile accordingly. Results show that, from the start, most articles are correctly classified. An initial profile opposite to the user's actual interests can be reversed in less than ten days, showing the robustness of our approach. A user's interest in an article is inferred with a high degree of accuracy (over 90%).
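A minimal naive Bayes sketch in the spirit of this abstract: the profile keeps per-label word counts, is updated as articles are read, and classifies new articles with Laplace smoothing. The training snippets and labels are hypothetical, not the prototype's feature model:
    import math
    from collections import Counter, defaultdict

    class InterestProfile:
        def __init__(self):
            self.word_counts = defaultdict(Counter)   # label -> word counts
            self.label_counts = Counter()

        def update(self, article_words, label):       # label: "interesting" / "not"
            self.label_counts[label] += 1
            self.word_counts[label].update(article_words)

        def classify(self, article_words):
            best, best_lp = None, -math.inf
            vocab = {w for c in self.word_counts.values() for w in c}
            for label in self.label_counts:
                lp = math.log(self.label_counts[label] / sum(self.label_counts.values()))
                total = sum(self.word_counts[label].values())
                for w in article_words:
                    lp += math.log((self.word_counts[label][w] + 1) /
                                   (total + len(vocab)))      # Laplace smoothing
                if lp > best_lp:
                    best, best_lp = label, lp
            return best

    profile = InterestProfile()
    profile.update("stock market rises on tech earnings".split(), "interesting")
    profile.update("local team wins football match".split(), "not")
    print(profile.classify("tech stock earnings report".split()))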

Short Papers

Speech and sketching for multimodal design BIBAFull-Text 214-216
  Aaron Adler; Randall Davis
While sketches are commonly and effectively used in the early stages of design, some information is far more easily conveyed verbally than by sketching. In response, we have combined sketching with speech, enabling a more natural form of communication. We studied the behavior of people sketching and speaking, and from this derived a set of rules for segmenting and aligning the signals from both modalities. Once the inputs are aligned, we use both modalities in interpretation. The result is a more natural interface to our system.
Leafing digital content BIBAFull-Text 217-219
  A. Barletta; M. Mayer; B. Moser
Today the availability of large digital content archives (video, ebook, audio) creates many problems in terms of user interaction and data manipulation (browsing, searching). Many approaches have been introduced in the past for quickly browsing a digital video library. In this paper we introduce a general framework for representing multimedia content with a more effective, user-driven, speed-dependent browsing process by using a different user interaction metaphor: the manual/mental process of "leafing through the pages of an illustrated magazine". "Digital Leafing" is a combination of user interactions and speed-dependent data representation of digital content. In the case of video, we conclude that such a solution provides more neutral and personal browsing, and trades off the complexity of video extraction algorithms against a higher degree of user control.
BioSonics: sensual explorations of a complex system BIBAFull-Text 220-222
  Daniel Bisig
Complex systems abound in nature and are becoming increasingly important in artificial systems. The understanding and controlling of such systems is a major challenge. This paper tries to take a fresh approach to these issues by describing an interactive art project that involves cross-modal interaction with a complex system. By combining sound and vision, the temporal and spatial dynamics of the system are conveyed simultaneously. Users can influence its dynamics in real time by using acoustics. Preliminary experiments with this system show that the combination of sound and vision can help users to obtain an intuitive understanding of the system's behavior. In addition, usability profits from the fact that the same modality is employed for both interaction and feedback.
TUISTER: a tangible UI for hierarchical structures BIBAFull-Text 223-225
  Andreas Butz; Markus Gross; Antonio Kruger
Tangible user interfaces provide access to virtual information through intuitive physical manipulation. However, feedback is mostly provided by displays in the environment instead of the TUI itself. In this paper we describe the design of Tuister, a tangible user interface with multiple embedded displays and sensors. We explain how Tuister can be used to browse and access hierarchical structures and briefly describe the current state of a prototype we're building.
Increasing performances and personalization in the interaction with a call center system BIBAFull-Text 226-228
  Federica Cena; Ilaria Torre
This paper describes the innovative combination of speech recognition and personalized response generation with the adaptive routing of calls to the operator who best fits the caller's features. The project aims at supporting the user incrementally, starting from personalized automatic support and moving to proficient human support when it is needed. In particular, the paper shows the adaptive workflow of the answering process and focuses on the principles for providing the personalized speech response.
Handling device diversity through multi-level stylesheets BIBAFull-Text 229-231
  Walter Dees
With the advent of in-home networking and ubiquitous computing, it becomes apparent that we have an increasing need for automatic adaptation of user interfaces to different devices. Many of the techniques to date have focused on making abstract models of the user interface in order to accommodate a wide range of target devices. However, generating a user interface from only abstract models may result in unattractive and possibly unusable user interfaces. There is a need for navigation and styling attributes that match the characteristics of the target device. Typically, a stylesheet, a presentation model, or a tailor-made adaptation engine is needed for every possible target device. To avoid having to do that (in full detail) for every possible target device, now and in the future, and to support run-time migration of user interfaces, we propose a technique called "multi-level stylesheets". This technique involves specifying style attributes at different levels of abstraction. This enables the reuse of style information for different devices, and will enable the automatic generation of not only usable, but also more attractive user interfaces.
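The multi-level lookup can be pictured as resolving each style attribute from the most device-specific sheet down to the most abstract one, so that a concrete device sheet only overrides what actually differs; the level names and attributes below are hypothetical:
    abstract_sheet = {"font_family": "sans-serif", "emphasis": "bold"}
    tv_class_sheet = {"font_size": "24pt", "navigation": "remote-arrows"}
    this_tv_sheet  = {"font_size": "32pt"}               # one large living-room TV

    def resolve(attribute, levels):
        for sheet in levels:                              # most specific level first
            if attribute in sheet:
                return sheet[attribute]
        return None

    levels = [this_tv_sheet, tv_class_sheet, abstract_sheet]
    for attr in ("font_size", "font_family", "navigation"):
        print(attr, "=", resolve(attr, levels))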
Second messenger: increasing the visibility of minority viewpoints with a face-to-face collaboration tool BIBAFull-Text 232-234
  Joan Morris DiMicco; Walter Bender
This paper introduces the application Second Messenger, a tool for supporting face-to-face meetings and discussions. Second Messenger uses a speech-recognition engine as an input method and outputs filtered keywords from the group's conversation onto an interactive display. The goal of this interface is to improve the quality of a group discussion by increasing the visibility of diverse viewpoints.
A graphical single-authoring framework for building multi-platform user interfaces BIBAFull-Text 235-237
  Yun Ding; Heiner Litz; Dennis Pfisterer
This paper presents our novel graphical single-authoring framework, which automatically creates customized user interfaces (UI) for a variety of devices by reusing an existing UI originally designed for large devices. It distinguishes itself from other authoring frameworks by its intuitive graphical support, UI reuse and its extensibility.
Classifying and assessing tremor movements for applications in man-machine intelligent user interfaces BIBAFull-Text 238-240
  Dan Marius Dobrea; Horia Nicolai Teodorescu
We introduce a new intelligent user interface (IUI) and a new methodology to identify the fatigue state of healthy subjects. The fatigue state is determined by means of a new type of input IUI, named Virtual Joystick. The main goal is to prove the ability of the new IUI system to identify the user's state. We describe the method used in data collection, the method used to highlight the existence of different physiological and psychological fatigue states reflected by the tremor signal, the classifier system, and finally the performance we obtained.
Identifying adaptation dimensions in digital talking books BIBAFull-Text 241-243
  Carlos Duarte; Luis Carrico
In this paper we identify adaptation enabling variables, and components that can be adapted, in a setting specific to Digital Talking Books. We then propose an evolution to our Digital Talking Book builder framework, to enable the production of adaptive books.
FAIM: integrating automated facial affect analysis in instant messaging BIBAFull-Text 244-246
  Rana El Kaliouby; Peter Robinson
One of the limitations in traditional instant messaging platforms is that they predominantly rely on text messages as the primary form of expression. This paper presents FAIM, an instant messaging application that analyzes a person's facial affect in real time and augments the dialogue with an emotive character representing them. Throughout the paper, we identify a number of design challenges that arise from integrating facial affect into instant messaging, and discuss how each of these issues is addressed in the design of FAIM. We also present a use case scenario of how FAIM works.
Describing documents: what can users tell us? BIBAFull-Text 247-249
  Daniel Goncalves; Joaquim A. Jorge
With the increasing number of computers per user, it has become common for most users to deal with growing numbers of electronic documents. Those documents are usually stored in hierarchic file systems, requiring them to be classified into the hierarchy, a difficult task. Such organization schemes do not provide adequate support for the efficient and effortless retrieval of documents at a later time, since their position in the hierarchy is one of the only clues to a document's whereabouts. However, humans are natural-born storytellers, and stories help relate and remember important pieces of information. Hence, the usage of narratives where a user "tells a story" about the document will be a valuable tool towards simplifying the retrieval task. To find out if there are common patterns in stories about documents, we performed a study where 60 such stories were collected and analyzed. We identified the most common story elements (time, storage and purpose) and how they are likely to relate in typical stories. This preliminary study suggests that it is possible to infer archetypical stories. Further, we present a set of guidelines for the design of narrative-based document retrieval interfaces.
Don't miss-r: recommending restaurants through an adaptive mobile system BIBAFull-Text 250-252
  Dina Goren-Bar; Tsvi Kuflik
The present study compares an adaptive simulated cellular-phone-based recommender system to a non-adaptive one, in order to evaluate user preferences with respect to system adaptivity. The results show that users prefer the adaptive system over the non-adaptive one even after minimal interaction with the system.
Perceptive assistive agents in team spaces BIBFull-Text 253-255
  Lisa D. Harper; Abigail S. Gertner; James A. Van Guilder
Context-aware mobile assistants for optimal interaction: a prototype for supporting the business traveler BIBAFull-Text 256-258
  Patrick Hertzog; Marc Torrens
Travel presents many situations in which context-aware computing can bring important benefits: pointing out notorious delays or bad weather during the planning phase, allowing the user to replan for handling unexpected situations, or suggesting flight alternatives to avoid strikes. This paper briefly describes an approach for integrating context-aware computing into a mobile travel assistant. We show how context-aware techniques can ease user interaction within mobile device applications. The presented ideas are illustrated with a scenario using a first version of a working prototype called Pocket reality.
Ontology modeling tool with concept dictionary BIBAFull-Text 259-261
  Yoichi Hiramatsu; Seiji Koide
The usefulness of an ontology is strongly dependent on the knowledge representation policy and its maintenance. Knowledge representation and modeling tools have been one of the most actively discussed themes among ontology researchers. Some ontology editing tools originated in the field of expert systems, while others were designed by ontology research groups. Key features of the newly implemented tool are reference to a concept dictionary to find the semantics of words, and use of the inference algorithm provided by Schank's Memory Organization Package. Satisfactory results were obtained in applications of ontologies modeled with the present tool. This paper describes the implementation of the tool and its effectiveness in solving some actual problems of enterprise integration.
Leveraging a better interface language to simplify adaptation BIBAFull-Text 262-264
  Joshua Introne; Richard Alterman
We describe an approach to building adaptive groupware systems. This approach encompasses a methodology that reduces the complexity of inferring user intent by identifying a domain-specific interface language that both supports the user's maintenance of common ground and can be used to drive an adaptive component. Our approach can be framed as follows: 1) Users of same-time, different-place collaborative systems must exchange certain types of coordination-specific information; 2) We can facilitate the exchange and management of this information by introducing special-purpose interface components, which we call Coordinating Representations, that structure these communications; 3) Information that is collected through these interface components is particularly well suited to driving intent inferencing procedures; and 4) Intent inference can be used to drive adaptive components that support the collaborative activity. We discuss empirical results from two experiments that validate this methodology.
Task specific eye movements understanding for a gaze-sensitive dictionary BIBAFull-Text 265-267
  Abdelaziz Khiat; Yoshio Matsumoto; Tsukasa Ogasawara
In this paper, we study the relation between the user's degree of understanding and his/her eye movements, in an effort to realize a proactive interface that monitors the user and provides contextual support. The application is a gaze-sensitive dictionary that helps the user when reading a text in a browser window. Not only is the user's gaze analyzed, but also the context, and thus the degree of difficulty of the text being read. The experimental results suggest using regressions as an indicator to trigger the help process, along with a context grounding approach.
A generate and sense approach to automated music composition BIBAFull-Text 268-270
  Sunjung Kim; Elisabeth Andre
Nobody would deny that music may evoke deep and profound emotions. In this paper, we present a perceptual music composition system that aims at the controlled manipulation of a user's emotional state. In contrast to traditional composing techniques, the individual components of a composition, such as melody, harmony, rhythm and instrumentation, are selected and combined in a user-specific manner without requiring the user to continuously provide feedback on the music through input devices such as a keyboard or mouse.
Towards a visualization architecture for time-critical applications BIBAFull-Text 271-273
  Jorn Kohlhammer; David Zeltzer
Time-critical domains, such as emergency management, demand fast decisions from expert users under stress. Our Decision-Centered Visualization (DCV) system supports decision making by integrating domain knowledge and knowledge about human situation awareness for time-critical visualization. Efficient information presentation is vital for the user's situation awareness and, in consequence, to his or her task performance. We address the problem of efficiently visualizing both existing and incoming information by connecting domain data types and presentation requirements of the domain's tasks.
Visual data mining and zoomable interfaces BIBAFull-Text 274-276
  Alexander Kort
In this paper an approach for combining a focus+context visual data mining method with zoomable interfaces is presented. To this end, a zoomable interface for analysing structurable image sets was coupled with a visual data mining component. Research questions such as interactions and their relationships to associated data, data-driven interaction restrictions, and view generation are discussed.
A plan-based mission control center for autonomous vehicles BIBAFull-Text 277-279
  Gary Look; Howard Shrobe
Teams of autonomous vehicles (AVs) carry out missions in a number of fields such as space exploration and search-and-rescue. However, human supervision is still required to monitor the status of the team to ensure that the mission is being carried out as planned. To reduce information overload on these supervisors, we have developed an application, the Mission Control Center (MCC), that aggregates and abstracts status information from AVs using a plan-based view of the mission. Using this model, the MCC presents mission status at the level of goals and plans and directs operator attention to the AVs that require the most attention.
An intelligent dialogue for online rule based expert systems BIBAFull-Text 280-282
  Sascha Mertens; Marius Rosu; Yuliadi Erdani
This paper describes a concept for creating freely configurable, intelligently behaving web dialogues for rule-based expert systems. Freely configurable means that the dialogue module developed with this concept is domain-independent and can be configured without any programming. Intelligent means that, in spite of this independence, it can behave in accordance with the expert system's knowledge and the received user inputs.
PARLING: e-literature for supporting children learning english as a second language BIBAFull-Text 283-285
  Ornella Mich; Elena Betta; Diego Giuliani
In this paper, we describe Parling, a multimedia system for supporting the learning of English as a second language (L2). It is aimed at 8-11 year-old primary school children. The idea behind the system is that famous children's literature offers the right motivating and low-anxiety context in which users can improve their vocabulary and learn new language structures. The technological core of the system is a speech recognizer that enables automatic pronunciation assessment. Parling is an adaptive system; it features a set of instructional games that change their vocabulary content dynamically based on the user's learning needs. A preliminary usability test of the prototype system gave positive results.
Improving automatic interface generation with smart templates BIBAFull-Text 286-288
  Jeffrey Nichols; Brad A. Myers; Kevin Litwack
One of the challenges of using mobile devices for ubiquitous remote control is the creation of the user interface. If automatically generated designs are used, then they must be close in quality to hand-designed interfaces. Automatically generated interfaces can be dramatically improved if they use standard conventions to which users are accustomed, such as the arrangement of buttons on a telephone dial-pad or the conventional play, stop, and pause icons on a media player. Unfortunately, it can be difficult for a system to determine where to apply design conventions because each appliance may represent its functionality differently. Smart Templates is a technique that uses parameterized templates in the appliance model to specify when such conventions might be automatically applied in the user interface. Our templates easily adapt to existing appliance models, and interface generators on different platforms can apply appropriate design conventions using templates.
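The template idea can be sketched as a simple matching step: if the functions exposed by an appliance model cover a template's required parameters, the generator emits the conventional arrangement instead of a generic one. The media-player template and appliance description below are hypothetical, not the paper's specification language:
    MEDIA_TEMPLATE = {"required": {"play", "stop"}, "optional": {"pause", "next", "prev"}}

    def apply_template(appliance_functions, template):
        funcs = set(appliance_functions)
        if template["required"] <= funcs:
            # Render the conventional ordering and icons for the matched functions.
            return ["[%s]" % f for f in ("play", "pause", "stop", "next", "prev") if f in funcs]
        return None                          # fall back to generic interface generation

    print(apply_template({"play", "stop", "pause", "volume"}, MEDIA_TEMPLATE))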
Real world sensorization for observing human behavior and its application to behavior-to-speech BIBAFull-Text 289-291
  Yoshifumi Nishida; Koji Kitamura; Hiroshi Aizawa; Toshio Hori; Makoto Kimura; Takeo Kanade; Hiroshi Mizoguchi
This paper describes a method for robustly detecting and efficiently recognizing daily human behavior in the real world. The proposed method involves real-world sensorization using ultrasonic tags to robustly observe behavior, real-world virtualization to create a virtual environment by modeling real objects using a stereovision system, and virtual sensorization of virtualized objects in order to quickly register the handling of objects in the real world and efficiently recognize specific human behavior. A behavior-to-speech system created on the basis of this recognition method is also presented as a new application of this technology.
Enabling customized & personalized interfaces in mobile computing BIBAFull-Text 292-294
  M. J. O'Grady; G. M. P. O'Hare
Providing an intuitive and ubiquitous interface to services aimed at the mobile computing community continues to preoccupy both service providers and certain sections of the research community. In this short paper, we present a snapshot of a system that is under ongoing development and reflect briefly on the initial results of some user evaluations. Based on these, we identify some critical problems with the current implementation and present a design oriented towards significantly improving the end user experience and making the interface more adaptable, such that it meets the expectations and requirements of the user.
Object-action association: a HCI design model BIBAFull-Text 295-297
  Robert Pastel; Nathan Skalsky
While developing the Simple Gesturing User Interface (SGUI) API for incorporating simple gestures into personal digital assistant (PDA) user interfaces, we developed an object-based HCI model and design process. The model assists the design of direct manipulation interfaces by proposing that all interface objects are amenable to user manipulation. The model extends object-action interface models by emphasizing the relationship between interface objects and their actions, i.e. object-action associations. We delineate three association properties: directness, localness, and appropriateness. We illustrate the design process and the utility of SGUI by developing Experimental Assistant, graphing software for high school science experiments on the PDA.
Designing interaction experiences for multi-platform service provision with essential use cases BIBAFull-Text 298-300
  Lia Patricio; Joao Falcao e Cunha; Raymond P. Fisk; Nuno J. Nunes
This paper addresses the problem of interaction design for service provision to customers in a multi-platform environment. It is based on a qualitative and quantitative study of a Portuguese multi-channel retail bank, and shows that, as most financial operations are functionally available across the different service platforms, experience requirements become increasingly influential in customers' usage of the different channels. Different financial services generate different interaction needs, and the fit between experience requirements and channel performance in satisfying those needs has a strong impact on customer channel choices. Based on these findings, essential use cases are applied and extended to capture experience requirements for the different financial operations in a technology-independent way. With this approach, interaction designers can identify which platforms are best suited to provide the different services available, improving the multi-channel service as a whole. It also enables the identification of areas of interaction experience that need improvement in each platform if the services offered are to be used effectively.
Usability trade-offs for adaptive user interfaces: ease of use and learnability BIBAFull-Text 301-303
  Tim F. Paymans; Jasper Lindenberg; Mark Neerincx
An analysis of context-aware user interfaces shows that adaptation mechanisms have a cost-benefit trade-off for usability. Unpredictable autonomous interface adaptations can easily reduce a system's usability. To reduce this negative effect of adaptive behaviour, we have attempted to help users build adequate mental models of such systems. A user support concept was developed and applied to a context-aware mobile device with an adaptive user interface. The approach was evaluated with users; as expected, the user support improved ease of use, but unexpectedly it reduced learnability. This shows that an increase in ease of use can be realised without actually improving the user's mental model of adaptive systems.
Making critiquing practical: incremental development of educational critiquing systems BIBAFull-Text 304-306
  Lin Qiu; Christopher K. Riesbeck
Expert critiquing systems in education can support teachers in providing high quality individualized feedback to students. These systems, however, require significant development effort before they can be put into use. In this paper, we describe an incremental approach that facilitates the development of educational critiquing systems by integrating manual critiquing with critique authoring. As a result of the integration, the development of critiquing systems becomes an evolutionary process. We describe a system that we built, the Java Critiquer, as an exemplar of our model. Results from real-life usage of the system suggest benefits for supporting teachers in critiquing student code.
Choosing when to interact with learners BIBAFull-Text 307-309
  Lei Qu; Ning Wang; W. Lewis Johnson
In this paper, we describe a method for pedagogical agents to choose when to interact with learners in interactive learning environments. This method is based on observations of human tutors coaching students in on-line learning tasks. It takes into account the focus of attention of the learner, the learner's current task, and expected time required to perform the task. A Bayesian network model combines evidence from eye gaze and interface actions to infer learner focus of attention. The attention model is combined with a plan recognizer to detect different types of learner difficulties such as confusion and indecision which warrant intervention. We plan to incorporate this capability into a pedagogical agent able to interact with learners in socially appropriate ways.
SmartKom mobile: intelligent ubiquitous user interaction BIBAFull-Text 310-312
  Rainer Malaka; Jochen Haeussler; Hidir Aras
This paper presents SmartKom Mobile, the mobile version of the SmartKom system. SmartKom Mobile brings together highly advanced user interaction and mobile computing in a novel way and allows for ubiquitous access to multi-domain information. SmartKom Mobile is device-independent and realizes multi-modal interaction in cars and on mobile devices such as PDAs. With its siblings, SmartKom Home and SmartKom Public, it provides intelligent user interfaces for an extremely broad range of scenarios and environments.
Semantic analysis for a speech user interface in an intelligent tutoring system BIBAFull-Text 313-315
  Yuexi Ren; Mark Hasegawa-Johnson; Stephen E. Levinson
In this paper, we describe the strategy of semantic analysis for a speech user interface that is designed for a multimodal intelligent tutoring system. The semantic analysis involves three phases: semantic parsing, salient word/phrase spotting, and accented word detection. Semantic parsing attempts to represent the recognized sentence with a well-formed semantic frame. The recognized sentence consists of the a posteriori most probable hypothesized words given the acoustic evidence, and is compliant with the grammatical knowledge represented by a semantic language model. The salient words/phrases are useful when semantic parsing fails. The accented words are useful when the user's response is outside our expectations, and they help make the computer agent progressively smarter.
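The fallback character of the first two phases can be pictured with a minimal Python sketch; the grammar pattern, salient vocabulary, and MOVE frame below are assumptions made for illustration only, and accent detection is not sketched.

    # Illustrative sketch (assumed grammar and vocabulary, not the paper's parser):
    # try to fill a semantic frame from the recognized utterance; if parsing fails,
    # fall back to spotting salient words/phrases.
    import re

    FRAME_PATTERN = re.compile(r"move the (?P<object>\w+) to the (?P<location>\w+)")
    SALIENT_TERMS = {"move", "block", "left", "right", "pick", "drop"}

    def analyze(utterance):
        match = FRAME_PATTERN.search(utterance.lower())
        if match:                                  # phase 1: well-formed semantic frame
            return {"frame": "MOVE", **match.groupdict()}
        spotted = [w for w in utterance.lower().split() if w in SALIENT_TERMS]
        if spotted:                                # phase 2: salient word/phrase spotting
            return {"frame": None, "salient": spotted}
        return {"frame": None, "salient": []}      # phase 3 (accent detection) not sketched

    print(analyze("Please move the block to the left"))
    print(analyze("uh can you pick it up"))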
The museum visit: generating seamless personalized presentations on multiple devices BIBAFull-Text 316-318
  C. Rocchi; O. Stock; M. Zancanaro; M. Kruppa; A. Kruger
The issue of the seamless interleaving of interaction with a mobile device and stationary devices is addressed, in a typical situation of educational entertainment: the visit to a museum. Among the salient elements of the described work are the emphasis on multimodality in the dynamic presentation and coherence throughout the visit. The adopted metaphor is that of a contextualized, TV-like presentation, useful for engaging (young) visitors. On the mobile device, personal video clips are dynamically generated from personalized verbal presentations; on larger stationary screens distributed throughout the museum, further background material and additional information are provided. A virtual presenter follows the visitors in their experience and gives advice on both types of devices and on the museum itself.
A multiple-application conversational agent BIBAFull-Text 319-321
  Steven Ross; Elizabeth Brownholtz; Robert Armes
In this paper, we describe the rationale behind, and the architecture of, a conversational agent capable of speech-enabling multiple applications.
Enhancing the interaction with information portals BIBFull-Text 322-324
  Eric Schwarzkopf
Low-fidelity location based information systems BIBAFull-Text 325-327
  Sanjay Sood; Kristian J. Hammond; Larry Birnbaum
In this article, we describe the intrinsic constraints of mobility and discuss how we can work around, and often exploit, these constraints using information implied by the context of the mobile user. In particular, we outline some of the work we have been doing in providing information to users on the basis of "Low-Fi" location information retrieved from the user's cellular phone. A user's location and other contextual information are used to retrieve task-relevant information and avoid many of the problems posed by the limits of mobile devices and their use. This is part of an ongoing effort to build "smart" navigational interfaces based on information about a user's location.
Dates and times in email messages BIBAFull-Text 328-330
  Mia K. Stern
In this paper, we present a user interface that allows users to keep track of calendar-related email messages. We provide visual cues for calendar-related items and allow users to search for these messages based on the dates and times they contain. We also provide an easier way for users to convert these calendar-related messages into actual calendar entries.
WOLD: a mixed-initiative wizard for producing multi-platform user interfaces BIBAFull-Text 331-333
  Julien Stocq; Jean Vanderdonckt
WOLD (Wizard fOr Leveraging the Development of multi-platform user interfaces) helps designers produce running user interfaces to the databases of information systems simultaneously for multiple computing platforms. The software consists of a wizard application that guides designers step by step, following a mixed-initiative approach in which production rules are structured in a decision tree for choosing the design options that cover the user interfaces to be produced. Its main goal is to speed up the development life cycle through a transformational approach: a spiral life cycle, derivation of user interfaces from database structures and queries, and intelligent layout derived from that structure and those queries. User interfaces are structured and described according to characteristics that remain independent of computing platforms.
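A minimal sketch of walking such a decision tree of production rules follows; the questions, branches, and design options are hypothetical and are not drawn from WOLD's actual rule base.

    # Hypothetical decision tree of production rules for choosing a UI design option.
    DECISION_TREE = {
        "question": "How many rows does the query typically return?",
        "branches": {
            "one": {"option": "form with one field per column"},
            "many": {
                "question": "Is the target platform screen small (e.g. a PDA)?",
                "branches": {
                    "yes": {"option": "paged list with a detail view"},
                    "no": {"option": "sortable table widget"},
                },
            },
        },
    }

    def choose_option(node, ask):
        """Walk the tree, asking the designer at each decision point."""
        while "option" not in node:
            answer = ask(node["question"], list(node["branches"]))
            node = node["branches"][answer]
        return node["option"]

    # Example: a non-interactive 'designer' that always picks the first listed answer.
    print(choose_option(DECISION_TREE, lambda question, answers: answers[0]))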
Tailored audio augmented environments for museums BIBAFull-Text 334-336
  Lucia Terrenghi; Andreas Zimmermann
The paper deals with the design of an intelligent user interface that augments the user experience in the museum domain by providing an immersive audio environment. We focus on issues concerning multimodal interaction, taking into account aural-visual perception principles. In addition, we highlight the potential of augmenting the visual real environment in a personalized way, thanks to context modeling techniques. The LISTEN project, a system for an immersive audio augmented environment applied in the art exhibition domain, provides an example of modeling and personalization methods affecting the audio interface in terms of content and organization.
Contextual contact retrieval BIBAFull-Text 337-339
  Jonathan Trevor; David M. Hilbert; Daniel Billsus; Jim Vaughan; Quan T. Tran
People routinely rely on physical and electronic systems to remind themselves of details regarding personal and organizational contacts. These systems include rolodexes, directories, and contact databases. To access details regarding contacts, users must typically shift their attention from the tasks they are performing to the contact system itself in order to manually look up contacts. This paper presents an approach for automatically retrieving contacts based on users' current context. Results are presented to users in a manner that does not disrupt their tasks but allows them to access contact details with a single interaction. The approach promotes the discovery of new contacts that users may not have found otherwise and supports serendipity.
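A toy Python sketch of this kind of context-based retrieval is shown below; the contact data model and keyword-overlap scoring are assumptions for illustration, not the paper's method.

    # Illustrative sketch: rank contacts by how many terms they share with the
    # user's current context (e.g. the document or calendar entry in view).
    def rank_contacts(context_terms, contacts):
        """Return contact names sorted by overlap with the current context, best first."""
        context = {t.lower() for t in context_terms}
        scored = []
        for contact in contacts:
            keywords = {k.lower() for k in contact["keywords"]}
            score = len(context & keywords)
            if score:
                scored.append((score, contact["name"]))
        return [name for score, name in sorted(scored, reverse=True)]

    contacts = [
        {"name": "A. Reviewer", "keywords": ["IUI", "paper", "deadline"]},
        {"name": "B. Plumber", "keywords": ["invoice", "kitchen"]},
    ]
    print(rank_contacts(["IUI", "camera-ready", "deadline"], contacts))

In the described approach the ranked results would be surfaced unobtrusively, so that a single interaction opens the full contact record.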
Agent wizard: building information agents by answering questions BIBAFull-Text 340-342
  Rattapoom Tuchinda; Craig A. Knoblock
We present a question-answering approach in which a user without any programming skills can build information agents by simply answering a series of questions. The resulting agents can perform fairly complex tasks that involve retrieving, filtering, integrating, and monitoring data from online sources. We evaluated our approach to building agents, which is implemented in a system called the Agent Wizard, by re-implementing a set of agents for monitoring travel that originally took four programmers roughly four days to implement. Using the Agent Wizard, the entire set of agents can be implemented in under 35 minutes.
The semantic of episodes in communication with the anthropomorphic interface agent MAX BIBAFull-Text 343-345
  Ian Voss
This paper presents aspects of the cognitive equipment of an anthropomorphic interface agent. It presents methods for the identification and modification of short-term memory episodes for their immediate use in interaction with the anthropomorphic interface agent MAX (Multimodal Assembly Expert), as well as for storing conceptualisations of modified episodes in long-term memory. This is done, on the one hand, by mapping a sequence of construction steps to long-term knowledge represented in a semantic net in order to identify an episode and, on the other hand, by dynamically augmenting this long-term memory to achieve a learning effect when episodes are modified in discourse. The subject of discourse is the cooperative assembly of a toy-kit aeroplane. The goal of this work is to comply with the user expectations of cognitive abilities that arise from the anthropoid appearance and motor abilities of MAX.
Overriding errors in a speech and gaze multimodal architecture BIBAFull-Text 346-348
  Qiaohui Zhang; Atsumi Imamiya; Kentaro Go; Xiaoyang Mao
This work explores how to use gaze and speech commands simultaneously to select an object on the screen. Multimodal systems have long been a key means of reducing the recognition errors of individual components, but a multimodal system generates errors of its own as well. The present study classifies these multimodal errors, analyzes their causes, and proposes solutions for eliminating them. The goal of this study is to gain insight into multimodal integration errors and to develop an error self-recoverable multimodal architecture, so that error-prone recognition technologies can perform at a more stable and robust level within a multimodal framework.
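The flavor of such speech-gaze integration, and of detecting when the two modalities disagree, can be sketched as follows; the per-object scores, weighting, and margin threshold are invented for illustration and are not the paper's architecture.

    # Illustrative sketch: combine the speech recognizer's candidate scores with
    # gaze proximity to pick one on-screen object, and flag the result as
    # unreliable when the combined ranking is too close to call.
    def fuse(speech_scores, gaze_scores, weight=0.5, min_margin=0.15):
        """Return (selected object, confident?) from per-object scores in [0, 1]."""
        combined = {
            obj: weight * speech_scores.get(obj, 0.0) + (1 - weight) * gaze_scores.get(obj, 0.0)
            for obj in set(speech_scores) | set(gaze_scores)
        }
        ranked = sorted(combined.items(), key=lambda kv: kv[1], reverse=True)
        best, runner_up = ranked[0], ranked[1] if len(ranked) > 1 else (None, 0.0)
        confident = best[1] - runner_up[1] >= min_margin
        return best[0], confident   # when not confident, the UI could ask the user

    print(fuse({"red_square": 0.8, "red_circle": 0.6},
               {"red_circle": 0.9, "red_square": 0.3}))

The low-confidence branch is where an error self-recoverable architecture would intervene, for example by asking the user to confirm the selection instead of silently committing to it.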

Demonstrations

SUIDT: safe user interface design tool BIBAFull-Text 350-351
  Mickael Baron; Patrick Girard
SUIDT (Safe User Interface Design Tool) is a model-based system for building interactive systems that respect the formal semantics of their functional cores. It implements complete cooperation between the task models (abstract and concrete), the domain model, and the presentation model, while ensuring the properties of the models. Finally, it maintains, throughout the design cycle, the links between every part of the system, including the functions of the functional core (the actual code).
A transformation-based environment for designing multi-device interactive applications BIBAFull-Text 352-353
  Silvia Berti; Giulio Mori; Fabio Paterno; Carmen Santoro
The ever-increasing availability of new types of devices raises a number of issues for user interface designers and interactive software developers. We have designed and developed a tool (TERESA), which can be helpful when designing applications accessible through various device types.
Intelligent interaction in art systems BIBAFull-Text 354-355
  Ernest Edmonds; Greg Turner
Artists work with computers and digital media in order to create artworks in complex and varied ways. Collaboration between technologists and artists frequently creates new forms of interaction between artist and computer: it also promotes interaction between technologists and artists. This demonstration shows an example of intelligent interface technology in interactive art.
WinAgent: a system for creating and executing personal information assistants using a web browser BIBAFull-Text 356-357
  Nikeeta Julasana; Akshat Khandelwal; Anupama Lolage; Prabhdeep Singh; Priyanka Vasudevan; Hasan Davulcu; I. V. Ramakrishnan
WinAgent is a software system for creating and executing Personal Information Assistants (PIAs). These are software robots that can locate and extract targeted data buried deep within a web site. They do so by automatically navigating to relevant sites, locating the correct Web pages (which can be either directly accessed by traversing appropriate links or reached by filling out HTML forms), and extracting, structuring, and organizing the data of interest from these pages into XML. The primary thrust of the WinAgent technology effort was to make these tools easy to use for users who are not necessarily trained in computing. In particular, users create and execute PIAs through a Web browser.
User interface generation with OlivaNova model execution system BIBAFull-Text 358-359
  Pedro J. Molina
This demo proposal shows the user interface code generation capabilities provided by the OlivaNova Model Execution System (ONME). The system is based on the conceptual pattern language Just-UI for user interface specification. The suite comprises a modeling tool, a model validator, and code generators that transform specifications into source code ready to run.
Demonstrating information in simple gestures BIBAFull-Text 360-361
  Robert Pastel; Nathan Skalsky
We introduce the simple gesturing user interface (SGUI), an application programming interface (API) for designing user interfaces that use simple gesturing on a personal digital assistant (PDA). SGUI is particularly appropriate for PDA interfaces because simple gestures can be recognized with minimal processing power while reserving all of the small display for user-task-specific information. A graphing application implemented on a PDA using SGUI illustrates the usability of gesturing interfaces and the information conveyed in a single gesture stroke.
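To illustrate how little processing such gestures can require (this is not the SGUI API; the function and thresholds are invented), a stroke can be classified from just its start and end points:

    # Minimal sketch: classify a pen stroke into one of four simple gestures,
    # the kind of lightweight recognition a PDA can do without dedicating any
    # screen space to gesture feedback.
    def classify_stroke(points, min_length=20):
        """points: list of (x, y) samples. Returns 'left'/'right'/'up'/'down' or None (tap)."""
        (x0, y0), (x1, y1) = points[0], points[-1]
        dx, dy = x1 - x0, y1 - y0
        if abs(dx) < min_length and abs(dy) < min_length:
            return None                      # too short to be a gesture: treat as a tap
        if abs(dx) >= abs(dy):
            return "right" if dx > 0 else "left"
        return "down" if dy > 0 else "up"    # screen coordinates: y grows downwards

    print(classify_stroke([(10, 100), (40, 102), (90, 105)]))   # -> 'right'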
Automated interaction design for command and control of military situations BIBAFull-Text 362-363
  Robin R. Penner; Erik S. Steinmetz
We will demonstrate the SHARED software, which contains an implementation of the Automated Interaction Design (AID) approach to dynamic creation of user interfaces. AID uses multiple agents, multiple models, and productive compositional processes to generate need-based user interfaces within a complex control domain. In addition to demonstrating operational software that responds to military interaction needs, we will present details of the underlying models and operations that support user interface generation in this domain.
Voice user interface principles for a conversational agent BIBAFull-Text 364-365
  Steven Ross; Elizabeth Brownholtz; Robert Armes
In this paper, we describe the user interface principles guiding the design of a conversational agent capable of speech-enabling multiple applications, and provide a sample of its typical dialog.
Computer algebra in interface design research BIBAFull-Text 366-367
  Harold Thimbleby; Jeremy Gow
Tools to design, analyse and evaluate user interfaces can be used in user interface design research and in interface modelling research. This demonstration shows two working systems: one in Mathematica that is mathematically sophisticated, and one as a 'conventional' rapid application development environment, where the mathematics is hidden, and which could form the basis of a professional design tool -- but which is based rigorously on the same algebraic formalism.
Demonstration of agent support for user hypotheses in problem diagnosis BIBAFull-Text 368-369
  Earl J. Wagner; Henry Lieberman
We present a web interface agent, Woodstein, that monitors user actions on the web and retrieves related information to assemble an integrated view of a transaction. It manages user hypotheses during diagnosis by capturing users' judgments of the correctness of data and processes. These hypotheses can be shared with others, such as customer service representatives, or saved for later. We will demonstrate this feature in the context of diagnosing problems on the web.

Workshops

Workshop on behavior-based user interface customization BIBFull-Text 372-373
  Lawrence Bergman; Tessa Lau
Exploring the design and engineering of mixed reality systems BIBAFull-Text 374-375
  Emmanuel Dubois; Philip Gray; Daniela Trevisan; Jean Vanderdonckt
This IUI'04 workshop is an opportunity to identify and articulate the key research challenges for the design and engineering of mixed reality systems. By clarifying and systematizing these challenges, we can improve our own understanding of the current state of the field, stimulate research activity, especially collaboration, and help establish the design of MR systems as a distinct research area.
Making model-based UI design practical: usable and open methods and tools BIBAFull-Text 376-377
  Hallvard Traetteberg; Pedro J. Molina; Nuno J. Nunes
Model-based UI design is an established discipline. However, it has not been adopted in the software industry with the initially expected success and has meanwhile remained largely within academia. The main aim of this workshop is to analyze the current problems with model-based UI design approaches and to envision the main characteristics of, and the challenges to be solved by, the next generation of model-based UI tools.
Workshop W5: multi-user and ubiquitous user interfaces (MU3I) BIBAFull-Text 378-379
  Andreas Butz; Antonio Kruger; Christian Kray; Albrecht Schmidt
The workshop on Multi-User and Ubiquitous User Interfaces (MU3I) discusses examples of and principles underlying user interfaces for ubiquitous computing and multi-user interfaces. It raises issues such as interface adaptation, resource limitations, and novel interaction techniques. The workshop is held as a full day event and the papers were reviewed by an international program committee. Online proceedings are available at http://www.mu3i.org/.