
Proceedings of the 2005 ACM Symposium on User Interface Software and Technology

Fullname: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology
Editors: Dan R. Olsen, Jr.
Location: Seattle, Washington
Dates: 2005-Oct-23 to 2005-Oct-26
Publisher: ACM
Standard No: ISBN 1-59593-271-2; ACM Order Number 429052
Papers: 32
Pages: 260
Links: Conference Home Page
  1. Tools
  2. Pointing
  3. Projection
  4. Mobile interfaces
  5. Touch
  6. Mouse taming
  7. Customization 1
  8. Cool stuff
  9. Customization 2
  10. Physical interfaces
Personal computing in the 21st century (pp. 1-2)
  Gary K. Starkweather
Ever since the dawn of the digital computer, invention, innovation, and creativity have been hallmarks of the industry. The mainframe computer seemed for a while to be the real player, with experts or at least highly trained professionals operating these large and expensive machines. Most users were allowed to see them through glass windows, but "hands on" was a rare opportunity. In 1972, the Xerox Palo Alto Research Center (PARC) built a remarkable personal computer named the ALTO. Except for the visionaries at PARC and a few others, most people considered the personal computer a mere curiosity in this early period. Today, the personal computer has become a tool that very few imagined. What might be yet to come?
   While prognosticating about the future is a risky endeavor at best, perhaps we can obtain a look ahead with a straightforward review of the current status of personal computing. We will look at operating systems, application software, and peripherals; however, the real goal of this talk is to see what the user interface, tools, and interactions with this future computing environment might be, or perhaps even should be. Will we still be using continuing variations of Doug Engelbart's mouse in 2020, or might something new and much more advanced emerge? How might users seamlessly deal with terabytes of storage? How might multi-user environments be used, and could multi-OS machines be an economic and generally available personal computing environment? Are there user experience issues that are critical in multi-OS environments? How might the user's display be different from today's? Will tomorrow's displays be larger, have a significantly higher pixel density, be much more paper-like, etc.? Might electronic printers and their requisite paper output still be with us by 2025, for example? Will home and neighborhood network resources finally be a powerful ally of the computing environment? Many exciting opportunities and questions beg for answers and industry insight.
   This talk will attempt to peer into the near future to see what we might expect of the personal computing environment based on what we can extrapolate from current experience and technology directions. While the exactitude of such projections may be limited, taken as a whole, there is perhaps much that can be learned from such an exercise. Why do this? Charles Kettering, the great automotive inventor, was asked why he spent so much time planning and thinking about the future. He wisely replied, "Because I am going to spend the rest of my life there." Thirty years ago, very few could have imagined all the wonderful things that personal computing has enabled. Perhaps we have just begun our exciting journey.

Tools

Citrus: a language and toolkit for simplifying the creation of structured editors for code and data (pp. 3-12)
  Andrew J. Ko; Brad A. Myers
Direct-manipulation editors for structured data are increasingly common. While such editors can greatly simplify the creation of structured data, there are few tools to simplify the creation of the editors themselves. This paper presents Citrus, a new programming language and user interface toolkit designed for this purpose. Citrus offers language-level support for constraints, restrictions, and change notifications on primitive and aggregate data; mechanisms for automatically creating, removing, and reusing views as data changes; a library of widgets, layouts, and behaviors for defining interactive views; and two comprehensive interactive editors as an interface to the language and toolkit itself. Together, these features support the creation of editors for a large class of data and code.
Metisse is not a 3D desktop! (pp. 13-22)
  Olivier Chapuis; Nicolas Roussel
Twenty years after the general adoption of overlapping windows and the desktop metaphor, modern window systems differ mainly in minor details such as window decorations or mouse and keyboard bindings. While a number of innovative window management techniques have been proposed, few of them have been evaluated and fewer have made their way into real systems. We believe that one reason for this is that most of the proposed techniques have been designed using a low fidelity approach and were never made properly available. In this paper, we present Metisse, a fully functional window system specifically created to facilitate the design, the implementation and the evaluation of innovative window management techniques. We describe the architecture of the system, some of its implementation details and present several examples that illustrate its potential.
Role-based control of shared application views (pp. 23-32)
  Lior Berry; Lyn Bartram; Kellogg S. Booth
Collaboration often relies on all group members having a shared view of a single-user application. A common situation is a single active presenter sharing a live view of her workstation screen with a passive audience, using simple hardware-based video signal projection onto a large screen or simple bitmap-based sharing protocols. This offers simplicity and some advantages over more sophisticated software-based replication solutions, but everyone has the exact same view of the application. This conflicts with the presenter's need to keep some information and interaction details private. It also fails to recognize the needs of the passive audience, who may struggle to follow the presentation because of verbosity, display clutter or insufficient familiarity with the application.
   Views that cater to the different roles of the presenter and the audience can be provided by custom solutions, but these tend to be bound to a particular application. In this paper we describe a general technique and a prototype implementation that allow standardized role-specific views of existing single-user applications and permit additional application-specific customization, with no change to the application source code. Role-based policies control manipulation and display of shared windows and image buffers produced by the application, providing semi-automated privacy protection and relaxed verbosity to meet both presenter and audience needs.

Pointing

Distant freehand pointing and clicking on very large, high resolution displays (pp. 33-42)
  Daniel Vogel; Ravin Balakrishnan
We explore the design space of freehand pointing and clicking interaction with very large high resolution displays from a distance. Three techniques for gestural pointing and two for clicking are developed and evaluated. In addition, we present subtle auditory and visual feedback techniques to compensate for the lack of kinesthetic feedback in freehand interaction, and to promote learning and use of appropriate postures.
Interacting with large displays from a distance with vision-tracked multi-finger gestural input (pp. 43-52)
  Shahzad Malik; Abhishek Ranjan; Ravin Balakrishnan
We explore the idea of using vision-based hand tracking over a constrained tabletop surface area to perform multi-finger and whole-hand gestural interactions with large displays from a distance. We develop bimanual techniques to support a variety of asymmetric and symmetric interactions, including fast targeting and navigation to all parts of a large display from the comfort of a desk and chair, as well as techniques that exploit the ability of the vision-based hand tracking system to provide multi-finger identification and full 2D hand segmentation. We also posit a design that allows for handling multiple concurrent users.
ViewPointer: lightweight calibration-free eye tracking for ubiquitous handsfree deixis (pp. 53-61)
  John D. Smith; Roel Vertegaal; Changuk Sohn
We introduce ViewPointer, a wearable eye contact sensor that detects deixis towards ubiquitous computers embedded in real-world objects. ViewPointer consists of a small wearable camera no more obtrusive than a common Bluetooth headset. ViewPointer allows any real-world object to be augmented with eye contact sensing capabilities, simply by embedding a small infrared (IR) tag. The headset camera detects when a user is looking at an infrared tag by determining whether the reflection of the tag on the cornea of the user's eye appears sufficiently central to the pupil. ViewPointer not only allows any object to become an eye contact sensing appliance, it also allows identification of users and transmission of data to the user through the object. We present a novel encoding scheme used to uniquely identify ViewPointer tags, as well as a method for transmitting URLs over tags. We present a number of application scenarios as well as an analysis of design principles. We conclude that eye contact sensing input is best utilized to provide context to action.
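The core corneal-reflection test lends itself to a compact illustration. Below is a minimal Python sketch, assuming upstream vision code has already extracted a pupil center and the tag reflection's position in the eye-camera frame (the function name and the 12-pixel tolerance are illustrative assumptions, not values from the paper):

    import math

    def is_looking_at_tag(pupil_center, reflection_center, radius_px=12.0):
        """Heuristic eye-contact test: report deixis when the IR tag's
        corneal reflection falls sufficiently close to the pupil center.
        Inputs are (x, y) pixel coordinates in the eye-camera frame;
        radius_px is an assumed tolerance, not a value from the paper."""
        dx = reflection_center[0] - pupil_center[0]
        dy = reflection_center[1] - pupil_center[1]
        return math.hypot(dx, dy) <= radius_px

    # A reflection 5 px from the pupil center counts as eye contact.
    print(is_looking_at_tag((320, 240), (323, 244)))  # True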

Projection

Moveable interactive projected displays using projector based tracking (pp. 63-72)
  Johnny C. Lee; Scott E. Hudson; Jay W. Summet; Paul H. Dietz
Video projectors have typically been used to display images on surfaces whose geometric relationship to the projector remains constant, such as walls or pre-calibrated surfaces. In this paper, we present a technique for projecting content onto moveable surfaces that adapts to the motion and location of the surface to simulate an active display. This is accomplished using a projector based location tracking technique. We use light sensors embedded into the moveable surface and project low-perceptibility Gray-coded patterns to first discover the sensor locations, and then incrementally track them at interactive rates. We describe how to reduce the perceptibility of tracking patterns, achieve interactive tracking rates, use motion modeling to improve tracking performance, and respond to sensor occlusions. A group of tracked sensors can define quadrangles for simulating moveable displays while single sensors can be used as control inputs. By unifying the tracking and display technology into a single mechanism, we can substantially reduce the cost and complexity of implementing applications that combine motion tracking and projected imagery.
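The Gray-code localization step can be sketched compactly. In the toy Python below, each projected frame carries one bit-plane of the Gray-coded column index, and a photosensor that records one brightness bit per frame can decode its own column (the 1024-column width and helper names are illustrative; the paper's low-perceptibility patterns and incremental tracking are not shown):

    def binary_to_gray(n):
        return n ^ (n >> 1)

    def gray_to_binary(bits):
        """Decode Gray-code bits (most significant first) to an integer."""
        b, n = 0, 0
        for g in bits:
            b ^= g            # running XOR recovers each binary bit
            n = (n << 1) | b
        return n

    def pattern_bit(column, frame, num_frames):
        """Brightness (0 or 1) of projector `column` in pattern `frame`,
        where frame 0 carries the most significant Gray bit."""
        return (binary_to_gray(column) >> (num_frames - 1 - frame)) & 1

    # A light sensor at column 37 of a 1024-wide projector records one
    # bit per projected frame; ten frames localize it exactly.
    FRAMES = 10  # log2(1024)
    readings = [pattern_bit(37, f, FRAMES) for f in range(FRAMES)]
    assert gray_to_binary(readings) == 37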
Zoom-and-pick: facilitating visual zooming and precision pointing with interactive handheld projectors (pp. 73-82)
  Clifton Forlines; Ravin Balakrishnan; Paul Beardsley; Jeroen van Baar; Ramesh Raskar
Designing interfaces for interactive handheld projectors is an exciting new area of research that is currently limited by two problems: hand jitter resulting in poor input control, and possible reduction of image resolution due to the needs of image stabilization and warping algorithms. We present the design and evaluation of a new interaction technique, called zoom-and-pick, that addresses both problems by allowing the user to fluidly zoom in on areas of interest and make accurate target selections. Subtle design features of zoom-and-pick enable pixel-accurate pointing, which is not possible in most freehand interaction techniques. Our evaluation results indicate that zoom-and-pick is significantly more accurate than the standard pointing technique described in our previous work.
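Why zooming buys pointing accuracy is easy to see in code. A hedged sketch of the coordinate mapping (names and numbers are hypothetical, not the paper's implementation):

    def zoomed_pick(cursor_xy, viewport_origin, zoom):
        """Map a cursor position inside a zoomed viewport back to
        source-image coordinates. At zoom > 1, one screen pixel of hand
        jitter moves the pick point by only 1/zoom source pixels, which
        is how magnification buys selection accuracy."""
        cx, cy = cursor_xy
        ox, oy = viewport_origin  # source pixel at the viewport's top-left
        return (ox + cx / zoom, oy + cy / zoom)

    # At 8x zoom, 4 px of jitter resolves to half a source pixel.
    print(zoomed_pick((4, 0), (100, 100), zoom=8))  # (100.5, 100.0)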
PlayAnywhere: a compact interactive tabletop projection-vision system (pp. 83-92)
  Andrew D. Wilson
We introduce PlayAnywhere, a front-projected computer vision-based interactive table system which uses a new commercially available projection technology to obtain a compact, self-contained form factor. PlayAnywhere's configuration addresses installation, calibration, and portability issues that are typical of most vision-based table systems, making it particularly well suited to consumer applications. PlayAnywhere also makes a number of contributions related to image processing techniques for front-projected vision-based table systems, including a shadow-based touch detection algorithm; a fast, simple visual bar code scheme tailored to projection-vision table systems; the ability to continuously track sheets of paper; and an optical flow-based algorithm for the manipulation of onscreen objects that does not rely on fragile tracking algorithms.

Mobile interfaces

Sensing and visualizing spatial relations of mobile devices (pp. 93-102)
  Gerd Kortuem; Christian Kray; Hans Gellersen
Location information can be used to enhance interaction with mobile devices. While many location systems require instrumentation of the environment, we present a system that allows devices to measure their spatial relations in a true peer-to-peer fashion. The system is based on custom sensor hardware implemented as a USB dongle, and computes spatial relations in real time. As an extension of this system, we propose a set of spatialized widgets for incorporating spatial relations into the user interface. The use of these widgets is illustrated in a number of applications, showing how spatial relations can be employed to support and streamline interaction with mobile devices.
eyeLook: using attention to facilitate mobile media consumption (pp. 103-106)
  Connor Dickie; Roel Vertegaal; Changuk Sohn; Daniel Cheng
One of the problems with mobile media devices is that they may distract users during critical everyday tasks, such as navigating the streets of a busy city. We addressed this issue in the design of eyeLook: a platform for attention sensitive mobile computing. eyeLook appliances use embedded low cost eyeCONTACT sensors (ECS) to detect when the user looks at the display. We discuss two eyeLook applications, seeTV and seeTXT, that facilitate courteous media consumption in mobile contexts by using the ECS to respond to user attention. seeTV is an attentive mobile video player that automatically pauses content when the user is not looking. seeTXT is an attentive speed reading application that flashes words on the display, advancing text only when the user is looking. By making mobile media devices sensitive to actual user attention, eyeLook allows applications to gracefully transition users between consuming media and managing life.
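The seeTV behavior reduces to a small control loop. A minimal sketch, assuming any gaze source that reports whether the user is currently looking (the class and sensor interface are invented for illustration; the ECS API is not described in the abstract):

    class AttentiveVideoPlayer:
        """Toy seeTV-style controller: play while the eye-contact
        sensor reports attention, pause otherwise."""

        def __init__(self, sensor):
            self.sensor = sensor  # callable returning True when user looks
            self.playing = False

        def tick(self):
            looking = self.sensor()
            if looking and not self.playing:
                self.playing = True
                print("resume playback")
            elif not looking and self.playing:
                self.playing = False
                print("pause playback")

    # Wire it to any gaze source; a stub suffices to exercise the logic.
    player = AttentiveVideoPlayer(lambda: True)
    player.tick()  # prints "resume playback"

A real deployment would likely debounce blinks with a short grace period before pausing.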
Circle & identify: interactivity-augmented object recognition for handheld devices (pp. 107-110)
  Byungkon Sohn; Geehyuk Lee
The first requirement of a "spatial mouse" is the ability to identify the object that it is aiming at. Among the many technologies that could be employed for this purpose, possibly the best solution would be object recognition by machine vision. The problem, however, is that object recognition algorithms are not yet reliable enough or light enough for handheld devices. This paper demonstrates that a simple object recognition algorithm can become a practical solution when augmented by interactivity. The user draws a circle around a target using a spatial mouse, and the mouse captures a series of camera frames. The frames can be easily stitched together to give a target image separated from the background, after which only feature extraction and object classification steps remain. We present here results from two experiments with a few household objects.
Supporting interaction in augmented reality in the presence of uncertain spatial knowledge (pp. 111-114)
  Enylton Machado Coelho; Blair MacIntyre; Simon J. Julier
A significant problem encountered when building Augmented Reality (AR) systems is that all spatial knowledge about the world has uncertainty associated with it. This uncertainty manifests itself as registration errors between the graphics and the physical world, and ambiguity in user interaction. In this paper, we show how estimates of the registration error can be leveraged to support predictable selection in the presence of uncertain 3D knowledge. These ideas are demonstrated in osgAR, an extension to OpenSceneGraph with explicit support for uncertainty in the 3D transformations. The osgAR runtime propagates this uncertainty throughout the scene graph to compute robust estimates of the probable location of all entities in the system from the user's viewpoint, in real-time. We discuss the implementation of selection in osgAR, and the issues that must be addressed when creating interaction techniques in such a system.
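The underlying math can be sketched in a few lines of numpy: a Gaussian location estimate pushed through a rigid transform, and a Mahalanobis test of how ambiguous a selection is (a hand-rolled 2D illustration of the general idea; osgAR's actual scene-graph API is not shown in the abstract):

    import numpy as np

    def transform_uncertainty(R, t, mean, cov):
        """Push a Gaussian point estimate through x' = R x + t. For a
        linear map the covariance propagates exactly as R cov R^T; a
        scene graph with nonlinear nodes would linearize first."""
        return R @ mean + t, R @ cov @ R.T

    def selection_ambiguity(cursor, mean, cov):
        """Mahalanobis distance of the cursor from an entity's projected
        location; small values mean the selection is unambiguous."""
        d = cursor - mean
        return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

    R, t = np.eye(2), np.array([5.0, 0.0])
    mean, cov = transform_uncertainty(R, t, np.array([1.0, 2.0]),
                                      4.0 * np.eye(2))
    print(selection_ambiguity(np.array([7.0, 2.0]), mean, cov))  # 0.5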

Touch

Low-cost multi-touch sensing through frustrated total internal reflection (pp. 115-118)
  Jefferson Y. Han
This paper describes a simple, inexpensive, and scalable technique for enabling high-resolution multi-touch sensing on rear-projected interactive surfaces based on frustrated total internal reflection. We review previous applications of this phenomenon to sensing, provide implementation details, discuss results from our initial prototype, and outline future directions.
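Because FTIR makes contact points glow brightly in the infrared camera image, the sensing software can be remarkably small. A hedged sketch of the kind of pipeline involved, with illustrative threshold and size constants (the paper's own processing details are not reproduced here):

    import numpy as np
    from scipy import ndimage

    def detect_touches(ir_frame, threshold=60, min_area=20):
        """Find fingertip blobs in the IR camera image of an FTIR
        surface: frustrated total internal reflection makes contact
        points show up as bright spots, so thresholding plus connected
        components gets surprisingly far. Threshold and minimum blob
        area are illustrative values, not the paper's."""
        mask = ir_frame > threshold
        labels, n = ndimage.label(mask)
        touches = []
        for i in range(1, n + 1):
            ys, xs = np.nonzero(labels == i)
            if xs.size >= min_area:
                touches.append((xs.mean(), ys.mean()))  # blob centroid
        return touches

    # A synthetic frame with one bright 8x8 "fingertip" near (40, 24).
    frame = np.zeros((120, 160))
    frame[20:28, 36:44] = 255
    print(detect_touches(frame, min_area=16))  # [(39.5, 23.5)]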
DTLens: multi-user tabletop spatial data exploration (pp. 119-122)
  Clifton Forlines; Chia Shen
Supporting groups of individuals exploring large maps and design diagrams on interactive tabletops is still an open research problem. Today's geospatial, mechanical engineering, and CAD design applications are mostly single-user, keyboard-and-mouse-based desktop applications. In this paper, we present the design of and experience with DTLens, a new zoom-in-context, multi-user, two-handed, multi-lens interaction technique that enables group exploration of spatial data with multiple individual lenses on the same direct-touch interactive tabletop. DTLens provides a set of consistent interactions on lens operations, thus minimizing tool switching by users during spatial data exploration.

Mouse taming

Bimanual and unimanual image alignment: an evaluation of mouse-based techniques (pp. 123-131)
  Celine Latulipe; Craig S. Kaplan; Charles L. A. Clarke
We present an evaluation of three mouse-based techniques for aligning digital images. We investigate the physical image alignment task and discuss the implications for interacting with virtual images. In a formal evaluation we show that a symmetric bimanual technique outperforms an asymmetric bimanual technique which in turn outperforms a unimanual technique. We show that even after mode switching times are removed, the symmetric technique outperforms the single mouse technique. Subjects also exhibited more parallel interaction using the symmetric technique than when using the asymmetric technique.
Predictive interaction using the Delphian Desktop (pp. 133-141)
  Takeshi Asano; Ehud Sharlin; Yoshifumi Kitamura; Kazuki Takashima; Fumio Kishino
This paper details the design and evaluation of the Delphian Desktop, a mechanism for online spatial prediction of cursor movements in a Windows-Icons-Menus-Pointers (WIMP) environment. Interaction with WIMP-based interfaces often becomes a spatially challenging task when the physical interaction mediators are the common mouse and a high resolution, physically large display screen. These spatial challenges are especially evident in overly crowded Windows desktops. The Delphian Desktop integrates simple yet effective predictive spatial tracking and selection paradigms into ordinary WIMP environments in order to simplify and ease pointing tasks. Predictions are calculated by tracking cursor movements and estimating spatial intentions using a computationally inexpensive online algorithm based on estimating the movement direction and peak velocity. In testing, the Delphian Desktop effectively shortened pointing time to faraway icons, and reduced the overall physical distance the mouse (and user hand) had to mechanically traverse.
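The abstract's prediction recipe, distance from peak velocity plus a movement direction, fits in a short function. In this sketch the linear coefficients are made up; in practice they would be calibrated per user:

    import math

    def predict_endpoint(samples, a=0.0, b=0.2):
        """Predict a pointing target from the cursor track so far.
        samples: list of (x, y, t) with strictly increasing t. Distance
        to the target is estimated from the peak velocity via the
        assumed linear model d = a + b * v_peak; direction is taken
        from the overall movement so far."""
        v_peak = 0.0
        for (x0, y0, t0), (x1, y1, t1) in zip(samples, samples[1:]):
            v_peak = max(v_peak, math.hypot(x1 - x0, y1 - y0) / (t1 - t0))
        x0, y0, _ = samples[0]
        x1, y1, _ = samples[-1]
        norm = math.hypot(x1 - x0, y1 - y0) or 1.0
        d = a + b * v_peak
        return (x0 + d * (x1 - x0) / norm, y0 + d * (y1 - y0) / norm)

    track = [(0, 0, 0.00), (40, 0, 0.05), (120, 0, 0.10), (230, 0, 0.15)]
    print(predict_endpoint(track))  # (440.0, 0.0) with these toy numbers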
Zliding: fluid zooming and sliding for high precision parameter manipulation (pp. 143-152)
  Gonzalo Ramos; Ravin Balakrishnan
High precision parameter manipulation tasks typically require adjustment of the scale of manipulation in addition to the parameter itself. This paper introduces the notion of Zoom Sliding, or Zliding, for fluid integrated manipulation of scale (zooming) via pressure input while parameter manipulation within that scale is achieved via x-y cursor movement (sliding). We also present the Zlider, a widget that instantiates the Zliding concept. We experimentally evaluate three different input techniques for use with the Zlider in conjunction with a stylus for x-y cursor positioning, in a high accuracy zoom and select task. Our results marginally favor the stylus with an integrated isometric pressure-sensing tip over bimanual techniques which separate zooming and sliding controls over the two hands. We discuss the implications of our results and present further designs that make use of Zliding.
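A toy model of the interaction makes the division of labor concrete: pressure sets the scale, x-movement adjusts the value at that scale (the pressure-to-zoom mapping below is an assumption for illustration, not the paper's):

    class Zlider:
        """Minimal Zliding-style widget model."""

        def __init__(self, value=0.0):
            self.value = value

        def update(self, dx_px, pressure):
            """dx_px: stylus x-movement; pressure: normalized to [0, 1]."""
            zoom = 1.0 + 99.0 * pressure   # assumed mapping: 1x .. 100x
            self.value += dx_px / zoom     # finer steps when zoomed in
            return self.value

    z = Zlider()
    z.update(10, pressure=0.0)  # coarse: +10.0
    z.update(10, pressure=1.0)  # fine:   +0.1
    print(z.value)              # 10.1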

Customization 1

Automatic image retargeting with fisheye-view warping (pp. 153-162)
  Feng Liu; Michael Gleicher
Image retargeting is the problem of adapting images for display on devices other than those originally intended. This paper presents a method for adapting large images, such as those taken with a digital camera, for a small display, such as a cellular telephone. The method uses a non-linear fisheye-view warp that emphasizes parts of an image while shrinking others. Like previous methods, fisheye-view warping uses image information, such as low-level salience and high-level object recognition, to find important regions of the source image. However, unlike prior approaches, a non-linear image warping function emphasizes the important aspects of the image while retaining the surrounding context. The method has advantages in preserving information content, alerting the viewer to missing information, and providing robustness.
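The warp's character can be conveyed with a one-axis, piecewise-linear stand-in for the paper's smooth non-linear function: the important region keeps full resolution and the context on either side is squeezed to fit (all names and numbers below are illustrative):

    def fisheye_axis(length_src, length_dst, roi):
        """Piecewise-linear stand-in for a smooth fisheye warp along one
        axis: the region of interest roi = (lo, hi) keeps 1:1 scale and
        both context sides are linearly squeezed to fit length_dst.
        Assumes 0 < lo < hi < length_src and hi - lo <= length_dst."""
        lo, hi = roi
        ctx_dst = length_dst - (hi - lo)                  # room for context
        left_dst = ctx_dst * lo / (lo + length_src - hi)  # split proportionally

        def warp(x):
            if x < lo:                                    # left context, squeezed
                return x * left_dst / lo
            if x <= hi:                                   # ROI, full resolution
                return left_dst + (x - lo)
            return (left_dst + hi - lo                    # right context, squeezed
                    + (x - hi) * (ctx_dst - left_dst) / (length_src - hi))

        return warp

    # 1000-px-wide photo onto a 400-px screen, keeping pixels 300-500 sharp.
    w = fisheye_axis(1000, 400, (300, 500))
    print(w(0), w(300), w(500), w(1000))  # 0.0 75.0 275.0 400.0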
Automation and customization of rendered web pages (pp. 163-172)
  Michael Bolin; Matthew Webber; Philip Rha; Tom Wilson; Robert C. Miller
On the desktop, an application can expect to control its user interface down to the last pixel, but on the World Wide Web, a content provider has no control over how the client will view the page once delivered to the browser. This creates an opportunity for end-users who want to automate and customize their web experiences, but the growing complexity of web pages and standards prevents most users from realizing this opportunity. We describe Chickenfoot, a programming system embedded in the Firefox web browser, which enables end-users to automate, customize, and integrate web applications without examining their source code. One way Chickenfoot addresses this goal is through a novel technique for identifying page components by keyword pattern matching. We motivate this technique by studying how users name web page components, and present a heuristic keyword matching algorithm that identifies the desired component from the user's name.
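The flavor of keyword matching is easy to sketch: score every candidate component by overlap between the user's words and its visible text, then pick the best (Chickenfoot's real algorithm is more elaborate; this toy version only shows the shape of the heuristic):

    import re

    def score_component(name, component_text):
        """Score a page component by overlap between the user's words
        and its visible text, lightly penalizing longer texts so tight
        matches win. An illustrative heuristic only."""
        words = set(re.findall(r"\w+", name.lower()))
        text = set(re.findall(r"\w+", component_text.lower()))
        if not words:
            return 0.0
        return (len(words & text) / len(words)
                - 0.01 * max(len(text) - len(words), 0))

    def find_component(name, components):
        """Pick the best-scoring (text, element) candidate, e.g. drawn
        from all buttons and links in the rendered page."""
        return max(components, key=lambda c: score_component(name, c[0]))

    buttons = [("Google Search", "btn1"), ("I'm Feeling Lucky", "btn2")]
    print(find_component("search button", buttons))  # ('Google Search', 'btn1')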
Preference elicitation for interface optimization (pp. 173-182)
  Krzysztof Gajos; Daniel S. Weld
Decision-theoretic optimization is becoming a popular tool in the user interface community, but creating accurate cost (or utility) functions has become a bottleneck -- in most cases the numerous parameters of these functions are chosen manually, which is a tedious and error-prone process. This paper describes ARNAULD, a general interactive tool for eliciting user preferences concerning concrete outcomes and using this feedback to automatically learn a factored cost function. We empirically evaluate our machine learning algorithm and two automatic query generation approaches and report on an informal user study.
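Learning a factored cost function from pairwise preferences can be illustrated with a perceptron-style updater; ARNAULD's actual learning algorithm differs, so treat this purely as a sketch of the problem setup:

    def learn_weights(features, preferences, epochs=50, lr=0.1):
        """features: dict outcome -> list of factor values.
        preferences: (better, worse) outcome pairs from the user.
        Returns weights making preferred outcomes cost less."""
        n = len(next(iter(features.values())))
        w = [0.0] * n
        cost = lambda o: sum(wi * fi for wi, fi in zip(w, features[o]))
        for _ in range(epochs):
            for better, worse in preferences:
                if cost(better) >= cost(worse):  # preference violated
                    for i in range(n):
                        w[i] -= lr * (features[better][i] - features[worse][i])
        return w

    feats = {"layoutA": [1.0, 3.0], "layoutB": [2.0, 1.0]}
    w = learn_weights(feats, [("layoutA", "layoutB")])
    print(w)  # [0.1, -0.2]: layoutA now costs less than layoutB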

Cool stuff

Mediating photo collage authoring (pp. 183-186)
  Nicholas Diakopoulos; Irfan Essa
The medium of collage supports the visualization of meaningful event summaries using photographs. It can, however, be rather tedious to author a collage from a large collection of photographs. In this work we present an approach that supports efficient construction of a collage by assisting the user with an automatic layout procedure that can be controlled at a high level. Our layout method utilizes a pre-designed template which consists of cells for photos and annotations applied to these cells. The layout is then filled by matching the metadata of photos to the annotations in the cells using an optimization algorithm. The user exercises flexibility in the authoring process by (a) maintaining high-level control through the types of constraints applied and (b) leveraging visual emphases supported by the layout algorithm. The user can, of course, exercise fine-grained control over the final collage through direct manipulation. Off-loading the tedium of collage construction to a user-controlled yet automated process clears the way for rapidly generating different views of the same album and could also support the increased sharing of digital photos in the form of compact collages.
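The matching step can be pictured as an assignment problem: photos go to the template cells whose annotations their metadata best satisfies. A brute-force sketch (fine for small templates; the paper's optimization algorithm is not specified in the abstract, and all tags here are invented):

    from itertools import permutations

    def layout_collage(cells, photos):
        """Assign photos to template cells by metadata match.
        cells: list of annotation tag-sets per cell.
        photos: list of (name, tag-set) pairs."""
        def score(assignment):
            return sum(len(cell & tags)
                       for cell, (_, tags) in zip(cells, assignment))
        best = max(permutations(photos, len(cells)), key=score)
        return [name for name, _ in best]

    cells = [{"group", "indoor"}, {"portrait"}, {"landscape", "outdoor"}]
    photos = [("beach.jpg", {"landscape", "outdoor"}),
              ("mom.jpg", {"portrait", "indoor"}),
              ("party.jpg", {"group", "indoor"})]
    print(layout_collage(cells, photos))  # ['party.jpg', 'mom.jpg', 'beach.jpg']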
Dial and see: tackling the voice menu navigation problem with cross-device user experience integration (pp. 187-190)
  Min Yin; Shumin Zhai
IVR (interactive voice response) menu navigation has long been recognized as a frustrating interaction experience. We propose an IM-based system that sends a coordinated visual IVR menu to the caller's computer screen. The visual menu is updated in real time in response to the caller's actions. With this automatically opened supplementary channel, callers can take advantage of different modalities over different devices and interact with the IVR system with the ease of graphical menu selection. Our approach of utilizing existing network infrastructure to pinpoint the caller's virtual location and coordinating multiple devices and multiple channels based on users' ID registration can also be applied more generally to create integrated user experiences across a group of devices.
DocWizards: a system for authoring follow-me documentation wizards (pp. 191-200)
  Lawrence Bergman; Vittorio Castelli; Tessa Lau; Daniel Oblinger
Traditional documentation for computer-based procedures is difficult to use: readers have trouble navigating long complex instructions, have trouble mapping from the text to display widgets, and waste time performing repetitive procedures. We propose a new class of improved documentation that we call follow-me documentation wizards. Follow-me documentation wizards step a user through a script representation of a procedure by highlighting portions of the text as well as application UI elements. This paper presents algorithms for automatically capturing follow-me documentation wizards by demonstration, through observing experts performing the procedure. We also present our DocWizards implementation on the Eclipse platform. We evaluate our system with an initial user study showing that most users have a marked preference for this form of guidance over traditional documentation.

Customization 2

Artistic resizing: a technique for rich scale-sensitive vector graphics (pp. 201-210)
  Pierre Dragicevic; Stephane Chatty; David Thevenin; Jean-Luc Vinot
When involved in the visual design of graphical user interfaces, graphic designers can do more than provide static graphics for programmers to incorporate into applications. We describe a technique that allows them to provide examples of graphical objects at various key sizes using their usual drawing tool, and then lets the system interpolate their resizing behavior. We relate this technique to current practices of graphic designers, provide examples of its use, and describe the underlying inference algorithm. We show how the mathematical properties of the algorithm allow the system to be predictable and explain how it can be combined with more traditional layout mechanisms.
A1: end-user programming for web-based system administration (pp. 211-220)
  Eser Kandogan; Eben Haber; Rob Barrett; Allen Cypher; Paul Maglio; Haixia Zhao
System administrators work with many different tools to manage and fix complex hardware and software infrastructure in a rapidly paced work environment. Through extensive field studies, we observed that they often build and share custom tools for specific tasks that are not supported by vendor tools. Recent trends toward web-based management consoles offer many advantages but put an extra burden on system administrators, as customization requires web programming, which is beyond the skills of many system administrators. To meet their needs, we developed A1, a spreadsheet-based environment with a task-specific system-administration language for quickly creating small tools or migrating existing scripts to run as web portlets. Using A1, system administrators can build spreadsheets to access remote and heterogeneous systems, gather and integrate status data, and orchestrate control of disparate systems in a uniform way. A preliminary user study showed that in just a few hours, system administrators can learn to use A1 to build relatively complex tools from scratch.
Informal prototyping of continuous graphical interactions by demonstration (pp. 221-230)
  Yang Li; James A. Landay
Informal prototyping tools have shown great potential in facilitating the early stage design of user interfaces. However, continuous interactions, an important constituent of highly interactive interfaces, have not been well supported by previous tools. These interactions give continuous visual feedback, such as geometric changes of a graphical object, in response to continuous user input, such as the movement of a mouse. We built Monet, a sketch-based tool for prototyping continuous interactions by demonstration. In Monet, designers can prototype continuous widgets and their states of interest using examples. They can also demonstrate compound behaviors involving multiple widgets by direct manipulation. Monet allows continuous interactions to be easily integrated with event-based, discrete interactions. Continuous widgets can be embedded into storyboards and their states can condition or trigger storyboard transitions. Monet achieves these features by employing continuous function approximation and statistical classification techniques, without using any domain specific knowledge or assuming any application semantics. Informal feedback showed that Monet is a promising approach to enabling more complete tool support for early stage UI design.
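Monet's continuous function approximation can be evoked with the simplest possible stand-in, piecewise-linear interpolation over demonstrated input/output pairs (Monet's actual approximation and classification machinery is richer than this):

    import bisect

    def approximate_behavior(demos):
        """Fit a continuous input -> output mapping from demonstrated
        (input, output) pairs by piecewise-linear interpolation.
        demos: list of (x, y) pairs, e.g. mouse x -> dial angle."""
        demos = sorted(demos)
        xs = [x for x, _ in demos]

        def f(x):
            i = bisect.bisect_left(xs, x)
            if i == 0:
                return demos[0][1]
            if i == len(demos):
                return demos[-1][1]
            (x0, y0), (x1, y1) = demos[i - 1], demos[i]
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

        return f

    # Three demonstrated samples of a rotary dial are enough to run it.
    dial = approximate_behavior([(0, 0.0), (100, 90.0), (200, 180.0)])
    print(dial(50))  # 45.0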

Physical interfaces

Physical embodiments for mobile communication agents (pp. 231-240)
  Stefan Marti; Chris Schmandt
This paper describes a physically embodied and animated user interface to an interactive call handling agent, consisting of a small wireless animatronic device in the form of a squirrel, bunny, or parrot. A software tool creates movement primitives, composes these primitives into complex behaviors, and triggers these behaviors dynamically at state changes in the conversational agent's finite state machine. Gaze and gestural cues from the animatronics alert both the user and co-located third parties of incoming phone calls, and data suggests that such alerting is less intrusive than conventional telephones.
PapierCraft: a command system for interactive paper (pp. 241-244)
  Chunyuan Liao; François Guimbretière; Ken Hinckley
Knowledge workers use paper extensively for document reviewing and note-taking due to its versatility and simplicity of use. As users annotate printed documents and gather notes, they create a rich web of annotations and cross references. Unfortunately, as paper is a static medium, this web often gets trapped in the physical world. While several digital solutions such as XLibris [15] and Digital Desk [18] have been proposed, they suffer from a small display size or onerous hardware requirements.
   To address these limitations, we propose PapierCraft, a gesture-based interface that allows users to manipulate digital documents directly using their printouts as proxies. Using a digital pen, users can annotate a printout or draw command gestures to indicate operations such as copying a document area, pasting an area previously copied, or creating a link. Upon pen synchronization, our infrastructure executes these commands and presents the result in a customized viewer. In this paper we describe the design and implementation of the PapierCraft command system, and report on early user feedback.
DT controls: adding identity to physical interfaces (pp. 245-252)
  Paul H. Dietz; Bret Harsham; Clifton Forlines; Darren Leigh; William Yerazunis; Sam Shipman; Bent Schmidt-Nielsen; Kathy Ryall
In this paper, we show how traditional physical interface components such as switches, levers, knobs and touch screens can be easily modified to identify who is activating each control. This allows us to change the function performed by the control, and the sensory feedback provided by the control itself, dependent upon the user. An auditing function is also available that logs each user's actions. We describe a number of example usage scenarios for our technique, and present two sample implementations.
Supporting interspecies social awareness: using peripheral displays for distributed pack awareness (pp. 253-258)
  Demi Mankoff; Anind Dey; Jennifer Mankoff; Ken Mankoff
In interspecies households, it is common for the non homo sapien members to be isolated and ignored for many hours each day when humans are out of the house or working. For pack animals, such as canines, information about a pack member's extended pack interactions (outside of the nuclear household) could help to mitigate this social isolation. We have developed a Pack Activity Watch System: Allowing Broad Interspecies Love In Telecommunication with Internet-Enabled Sociability (PAWSABILITIES) for helping to support remote awareness of social activities. Our work focuses on canine companions and includes pawticipatory design, labradory tests, and canid camera monitoring.