
Proceedings of the 2001 ACM Symposium on User Interface Software and Technology

Fullname:Proceedings of the 2001 ACM Symposium on User Interface Software and Technology
Editors:Elizabeth Mynatt
Location:Orlando, Florida, USA
Dates:2001-Nov-11 to 2001-Nov-14
Publisher:ACM
Standard No:ISBN 1-58113-438-X; ACM Order Number 429012
Papers:31
Pages:237
Links:Conference Home Page
  1. Papers: Tangible support for collaboration
  2. Papers: Off the wall
  3. Papers: Tangible support for collaboration
  4. Papers: Information visualization
  5. Papers: Managing user interaction
  6. Papers: On the move
  7. Papers: Expressive user interfaces
  8. Invited demonstrations
  9. Papers: 3D drawing
  10. Papers: Novel user input
  11. Papers: Tactile user interface

Papers: Tangible support for collaboration

The designers' outpost: a tangible interface for collaborative web site design BIBAFull-Text 1-10
  Scott R. Klemmer; Mark W. Newman; Ryan Farrell; Mark Bilezikjian; James A. Landay
In our previous studies into web design, we found that pens, paper, walls, and tables were often used for explaining, developing, and communicating ideas during the early phases of design. These wall-scale paper-based design practices inspired The Designers' Outpost, a tangible user interface that combines the affordances of paper and large physical workspaces with the advantages of electronic media to support information design. With Outpost, users collaboratively author web site information architectures on an electronic whiteboard using physical media (Post-it notes and images), structuring and annotating that information with electronic pens. This interaction is enabled by a touch-sensitive SMART Board augmented with a robust computer vision system, employing a rear-mounted video camera for capturing movement and a front-mounted high-resolution camera for capturing ink. We conducted a participatory design study with fifteen professional web designers. The study validated that Outpost supports information architecture work practice, and led to our adding support for fluid transitions to other tools.
ConnecTables: dynamic coupling of displays for the flexible creation of shared workspaces BIBAFull-Text 11-20
  Peter Tandler; Thorsten Prante; Christian Müller-Tomfelde; Norbert Streitz; Ralf Steinmetz
We present the ConnecTable, a new mobile, networked and context-aware information appliance that provides affordances for pen-based individual and cooperative work as well as for the seamless transition between the two. In order to dynamically enlarge an interaction area for the purpose of shared use, a flexible coupling of displays has been realized that overcomes the restrictions of display sizes and borders. Two ConnecTable displays dynamically form a homogeneous display area when moved close to each other. The appropriate triggering signal comes from built-in sensors, allowing users to temporarily combine their individual displays into a larger shared one by a simple physical movement in space. Connected ConnecTables allow their users to work in parallel in an ad hoc shared workspace as well as to exchange information by simply shuffling objects from one display to the other. We discuss the user interface and related issues as well as the software architecture. We also present the physical realization of the ConnecTables.

Papers: Off the wall

Fluid interaction with high-resolution wall-size displays BIBAFull-Text 21-30
  François Guimbretière; Maureen Stone; Terry Winograd
This paper describes new interaction techniques for direct pen-based interaction on the Interactive Mural, a large (6'x3.5') high-resolution (64 dpi) display. They have been tested in a digital brainstorming tool that has been used by groups of professional product designers. Our "interactive wall" metaphor for interaction has been guided by several goals: to support both free-hand sketching and high-resolution materials, such as images, 3D models and GUI application windows; to present a visual appearance that does not clutter the content with control devices; and to support fluid interaction, which minimizes the amount of attention demanded and interruption due to the mechanics of the interface. We have adapted and extended techniques that were developed for electronic whiteboards and generalized the use of the FlowMenu to execute a wide variety of actions in a single pen stroke. While these techniques were designed for a brainstorming tool, they are very general and can be used in a wide variety of application domains using interactive surfaces.

Papers: Tangible support for collaboration

Focus plus context screens: combining display technology with visualization techniques BIBAFull-Text 31-40
  Patrick Baudisch; Nathaniel Good; Paul Stewart
Computer users working with large visual documents, such as large layouts, blueprints, or maps, perform tasks that require them to simultaneously access overview information while working on details. To avoid the need for zooming, users currently have to choose between using a sufficiently large screen or applying appropriate visualization techniques. Currently available hi-res "wall-size" screens, however, are cost-intensive, space-intensive, or both. Visualization techniques allow the user to more efficiently use the given screen space, but in exchange they either require the user to switch between multiple views or they introduce distortion. In this paper, we present a novel approach to simultaneously display focus and context information. Focus plus context screens consist of a hi-res display and a larger low-res display. Image content is displayed such that the scaling of the display content is preserved, while its resolution may vary according to which display region it is displayed in. Focus plus context screens are applicable to practically all tasks that currently use overviews or fisheye views, but unlike these visualization techniques, focus plus context screens provide a single, non-distorted view. We present a prototype that seamlessly integrates an LCD with a projection screen and demonstrate four applications that we have adapted so far.
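A minimal sketch of the geometric idea described above (not the authors' implementation): the same document region is rendered at the same physical size on both displays, and only the pixel resolution differs. The DPI values are hypothetical.

    # Minimal sketch (not the authors' implementation): the same document region
    # is rendered at the same physical size on both displays; only the pixel
    # resolution differs. The DPI values are hypothetical.

    FOCUS_DPI = 120.0     # e.g. an LCD panel embedded in the projection screen
    CONTEXT_DPI = 30.0    # e.g. the projected low-resolution surround

    def physical_to_pixels(width_in, height_in, dpi):
        """Convert a physical extent (inches) to pixel dimensions on one display."""
        return round(width_in * dpi), round(height_in * dpi)

    # A 4x3 inch region of the document keeps its physical scale everywhere.
    print(physical_to_pixels(4, 3, FOCUS_DPI))     # (480, 360) pixels in the focus
    print(physical_to_pixels(4, 3, CONTEXT_DPI))   # (120, 90) pixels in the context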
Support for multitasking and background awareness using interactive peripheral displays BIBAFull-Text 41-50
  Blair MacIntyre; Elizabeth D. Mynatt; Stephen Voida; Klaus M. Hansen; Joe Tullio; Gregory M. Corso
In this paper, we describe Kimura, an augmented office environment to support common multitasking practices. Previous systems, such as Rooms, limit users by constraining the interaction to the desktop monitor. In Kimura, we leverage interactive projected peripheral displays to support the perusal, manipulation and awareness of background activities. Furthermore, each activity is represented by a montage comprised of images from current and past interaction on the desktop. These montages help remind the user of past actions, and serve as a springboard for ambient context-aware reminders and notifications.

Papers: Information visualization

Parallel bargrams for consumer-based information exploration and choice BIBAFull-Text 51-60
  Kent Wittenburg; Tom Lanning; Michael Heinrichs; Michael Stanton
In this paper we introduce multidimensional visualization and interaction techniques that extend related work on parallel histograms and dynamic querying. Bargrams are, in effect, histograms whose bars have been tipped over and lined up end-to-end. We discuss affordances of parallel bargrams in the context of systems that support consumer-based information exploration and choice based on the attributes of the items in the choice set. Our tool, EZChooser, has enabled a number of prototypes in domains such as Internet shopping, investment decisions, and college choice, and a limited version has been deployed for car shopping. Evaluations of the techniques include an experiment indicating that trained users prefer EZChooser over static tables for choice tasks among sets of 50 items with 7-9 attributes.
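As a rough illustration of the bargram layout itself (invented data and labels, not from EZChooser), the sketch below lays one attribute's buckets end-to-end with segment widths proportional to their counts.

    # Rough illustration of the layout only (invented data, not EZChooser): one
    # attribute's buckets are laid end-to-end, each segment's width proportional
    # to its count -- a histogram tipped over.

    def bargram_segments(buckets, strip_width):
        total = sum(count for _, count in buckets)
        x = 0.0
        segments = []
        for label, count in buckets:
            w = strip_width * count / total
            segments.append((label, x, x + w))   # (label, left edge, right edge)
            x += w
        return segments

    # Car prices bucketed into three ranges, laid out on a 600-pixel strip.
    print(bargram_segments([("<$15k", 12), ("$15-25k", 30), (">$25k", 8)], 600))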
A framework for unifying presentation space BIBAFull-Text 61-70
  M. S. T. Carpendale; Catherine Montagnese
Making effective use of the available display space has long been a fundamental issue in user interface design. We live in a time of rapid advances in available CPU power and memory. However, the common sizes of our computational display spaces have only minimally increased or, in some cases such as handheld devices, actually decreased. In addition, the size and scope of the information spaces we wish to explore are also expanding. Representing vast amounts of information on our relatively small screens has become increasingly problematic and has been associated with problems in navigation, interpretation and recognition. User interface research has proposed several differing presentation approaches to address these problems. These methods create displays that vary considerably, visually and algorithmically. We present a unified framework that provides a way of relating seemingly distinct methods, facilitating the inclusion of more than one presentation method in a single interface. Furthermore, it supports extrapolation between the presentation methods it describes. Of particular interest are the presentation possibilities that exist in the ranges between various distortion presentations, magnified insets and detail-in-context presentations, and between detail-in-context presentations and a full-zooming environment. This unified framework offers a geometric presentation library in which presentation variations are available independently of the mode of graphic representation. The intention is to promote exploration and experimentation with varied presentation combinations.
PhotoMesa: a zoomable image browser using quantum treemaps and bubblemaps BIBAFull-Text 71-80
  Benjamin B. Bederson
PhotoMesa is a zoomable image browser that uses a novel treemap algorithm to present large numbers of images grouped by directory, or other available metadata. It uses a new interaction technique for zoomable user interfaces designed for novices and family use that makes it straightforward to navigate through the space of images, and impossible to get lost. PhotoMesa groups images using one of two new algorithms that lay out groups of objects in a 2D space-filling manner. Quantum treemaps are designed for laying out images or other objects of indivisible (quantum) size. They are a variation on existing treemap algorithms in that they guarantee that every generated rectangle will have a width and height that are an integral multiple of an input object size. Bubblemaps also fill space with groups of quantum-sized objects, but generate non-rectangular blobs, and utilize space more efficiently.
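The sketch below illustrates only the "quantum" guarantee for a single group, choosing a grid whose width and height are exact multiples of the thumbnail cell size; it is not the published quantum treemap or bubblemap algorithm.

    import math

    # Sketch of only the "quantum" guarantee for a single group (not the published
    # quantum treemap or bubblemap layout): choose a grid for n fixed-size
    # thumbnails whose width and height are exact multiples of the cell size,
    # aiming at a target aspect ratio.

    def quantum_rect(n, cell_w, cell_h, target_aspect=1.0):
        best = None
        for cols in range(1, n + 1):
            rows = math.ceil(n / cols)
            w, h = cols * cell_w, rows * cell_h          # integral multiples
            badness = abs(w / h - target_aspect)
            if best is None or badness < best[0]:
                best = (badness, w, h, cols, rows)
        return best[1:]                                  # (width, height, cols, rows)

    print(quantum_rect(11, cell_w=80, cell_h=60, target_aspect=4 / 3))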

Papers: Managing user interaction

Outlier finding: focusing user attention on possible errors BIBAFull-Text 81-90
  Robert C. Miller; Brad A. Myers
When users handle large amounts of data, errors are hard to notice. Outlier finding is a new way to reduce errors by directing the user's attention to inconsistent data which may indicate errors. We have implemented an outlier finder for text, which can detect both unusual matches and unusual mismatches to a text pattern. When integrated into the user interface of a PBD text editor and tested in a user study, outlier finding substantially reduced errors.
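As a loose illustration of the idea (not Miller and Myers's algorithm), the sketch below flags pattern matches whose simple surface features deviate strongly from the rest of the group, so the user's attention can be drawn to possible errors.

    from statistics import mean, pstdev

    # Loose illustration (not the paper's algorithm): flag matches whose simple
    # features (length, digit count) deviate strongly from the rest of the group.

    def features(s):
        return [len(s), sum(c.isdigit() for c in s)]

    def find_outliers(matches, threshold=2.0):
        feats = [features(m) for m in matches]
        columns = list(zip(*feats))                      # one column per feature
        outliers = []
        for m, f in zip(matches, feats):
            score = 0.0
            for value, column in zip(f, columns):
                sd = pstdev(column)
                if sd > 0:
                    score = max(score, abs(value - mean(column)) / sd)
            if score >= threshold:
                outliers.append(m)
        return outliers

    # Matches for a date-like pattern; the malformed row stands out.
    print(find_outliers(["Jan 5", "Feb 12", "Mar 9", "Apr 30",
                         "totally broken row", "May 2"]))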
A modular geometric constraint solver for user interface applications BIBAFull-Text 91-100
  Hiroshi Hosobe
Constraints have played an important role in the user interface field since its infancy. A prime use of constraints in this field is to automatically maintain geometric layouts of graphical objects. To facilitate the construction of constraint-based user interface applications, researchers have proposed various constraint satisfaction methods and constraint solvers. Most previous research has focused on either local propagation or linear constraints, excluding more general nonlinear ones. However, nonlinear geometric constraints are practically useful in various user interfaces, e.g., drawing editors and information visualization systems. In this paper, we propose a novel constraint solver called Chorus, which realizes various powerful nonlinear geometric constraints such as Euclidean geometric, non-overlapping, and graph layout constraints. A key feature of Chorus is its module mechanism that allows users to define new kinds of geometric constraints. Also, Chorus supports "soft" constraints with hierarchical strengths or preferences (i.e., constraint hierarchies). We describe its framework, algorithm, implementation, and experimental results.
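As a toy stand-in for what such a solver does (this is not the Chorus algorithm or its module mechanism), the sketch below treats soft geometric constraints as weighted error terms and minimizes their sum by numerical gradient descent; the stronger distance constraint wins over the weaker stay-where-you-are preference.

    # Toy stand-in (not the Chorus algorithm): treat soft geometric constraints as
    # weighted error terms over a flat list of variables [ax, ay, bx, by] and
    # minimize their weighted sum by numerical gradient descent.

    def solve(variables, error_terms, steps=2000, lr=0.01, eps=1e-4):
        x = list(variables)

        def total(v):
            return sum(weight * err(v) for weight, err in error_terms)

        for _ in range(steps):
            grad = []
            for i in range(len(x)):
                x[i] += eps
                hi = total(x)
                x[i] -= 2 * eps
                lo = total(x)
                x[i] += eps
                grad.append((hi - lo) / (2 * eps))    # central difference
            x = [xi - lr * g for xi, g in zip(x, grad)]
        return x

    def distance_error(v):       # strong constraint: keep A and B 100 units apart
        d = ((v[0] - v[2]) ** 2 + (v[1] - v[3]) ** 2) ** 0.5
        return (d - 100.0) ** 2

    def stay_error(v):           # weak preference: stay near the original layout
        original = [0.0, 0.0, 30.0, 0.0]
        return sum((v[i] - original[i]) ** 2 for i in range(len(v)))

    solution = solve([0.0, 0.0, 30.0, 0.0],
                     [(1.0, distance_error), (0.01, stay_error)])
    print([round(v, 1) for v in solution])   # the points end up roughly 100 apart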
View management for virtual and augmented reality BIBAFull-Text 101-110
  Blaine Bell; Steven Feiner; Tobias Höllerer
We describe a view-management component for interactive 3D user interfaces. By view management, we mean maintaining visual constraints on the projections of objects on the view plane, such as locating related objects near each other, or preventing objects from occluding each other. Our view-management component accomplishes this by modifying selected object properties, including position, size, and transparency, which are tagged to indicate their constraints. For example, some objects may have geometric properties that are determined entirely by a physical simulation and which cannot be modified, while other objects may be annotations whose position and size are flexible. We introduce algorithms that use upright rectangular extents to represent on the view plane a dynamic and efficient approximation of the occupied space containing the projections of visible portions of 3D objects, as well as the unoccupied space in which objects can be placed to avoid occlusion. Layout decisions from previous frames are taken into account to reduce visual discontinuities. We present augmented reality and virtual reality examples to which we have applied our approach, including a dynamically labeled and annotated environment.
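A much-simplified illustration of managing the view plane with upright rectangles (not the authors' algorithm): scan candidate positions for a label rectangle and accept the first one that overlaps no occupied rectangle.

    # Simplified illustration (not the paper's algorithm): place an upright label
    # rectangle of size (w, h) on the view plane so that it overlaps none of the
    # occupied rectangles. Rectangles are (x, y, width, height).

    def overlaps(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

    def place_label(w, h, occupied, view_w, view_h, step=10):
        for y in range(0, view_h - h + 1, step):
            for x in range(0, view_w - w + 1, step):
                candidate = (x, y, w, h)
                if not any(overlaps(candidate, r) for r in occupied):
                    return candidate
        return None   # no free spot at this grid resolution

    occupied = [(100, 80, 200, 150), (400, 300, 120, 120)]   # projections of objects
    print(place_label(90, 30, occupied, view_w=640, view_h=480))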

Papers: On the move

LetterWise: prefix-based disambiguation for mobile text input BIBAFull-Text 111-120
  I. Scott MacKenzie; Hedy Kober; Derek Smith; Terry Jones; Eugene Skepner
A new technique to enter text using a mobile phone keypad is described. For text input, the traditional touchtone phone keypad is ambiguous because each key encodes three or four letters. Instead of using a stored dictionary to guess the intended word, our technique uses probabilities of letter sequences -- "prefixes" -- to guess the intended letter. Compared to dictionary-based methods, this technique, called LetterWise, takes significantly less memory and allows entry of non-dictionary words without switching to a special input mode. We conducted a longitudinal study to compare LetterWise to Multitap, the conventional text entry method for mobile phones. The experiment included 20 participants (10 LetterWise, 10 Multitap), and each entered phrases of text for 20 sessions of about 30 minutes each. Error rates were similar between the techniques; however, by the end of the experiment the mean entry speed was 36% faster with LetterWise than with Multitap.
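A toy sketch of prefix-based disambiguation: when a key is pressed, pick the letter on that key most likely to follow the text entered so far. The probability table here is invented and far smaller than the letter-sequence database LetterWise uses, and the real system also handles longer prefixes and correction.

    # Toy sketch of prefix-based disambiguation; the probability table is invented.
    KEYPAD = {2: "abc", 3: "def", 4: "ghi", 5: "jkl",
              6: "mno", 7: "pqrs", 8: "tuv", 9: "wxyz"}

    # P(next letter | previous letter), illustrative values only.
    PREFIX_PROB = {
        ("t", "h"): 0.30, ("t", "g"): 0.02, ("t", "i"): 0.08,
        ("h", "e"): 0.25, ("h", "d"): 0.01, ("h", "f"): 0.01,
    }

    def next_letter(text, key):
        prev = text[-1] if text else ""
        return max(KEYPAD[key], key=lambda c: PREFIX_PROB.get((prev, c), 0.001))

    text = "t"
    text += next_letter(text, 4)   # key 4 = g/h/i: 'h' is most likely after "t"
    text += next_letter(text, 3)   # key 3 = d/e/f: 'e' is most likely after "th"
    print(text)                    # -> "the"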
From desktop to phonetop: a UI for web interaction on very small devices BIBAFull-Text 121-130
  Jonathan Trevor; David M. Hilbert; Bill N. Schilit; Tzu Khiau Koh
While it is generally accepted that new Internet terminals should leverage the installed base of Web content and services, the differences between desktop computers and very small devices make this challenging. Indeed, the browser interaction model has evolved on desktop computers having a unique combination of user interface (large display, keyboard, pointing device), hardware, and networking capabilities. In contrast, Internet-enabled cell phones, typically with 3-10 lines of text, sacrifice usability as Web terminals in favor of portability and other functions. Based on our earlier experiences building and using a Web browser for small devices, we propose a new UI that splits apart the integrated activities of link following and reading into two separate modes: navigating to, and acting on, web content. This interaction technique for very small devices both simplifies navigation and allows users to do more than just read. The M-Links system incorporates modal browsing interaction and addresses a number of associated problems. We have built our system with an emphasis on simplicity and user extensibility and describe the design, implementation and evolution of the user interface.
Join and capture: a model for nomadic interaction BIBAFull-Text 131-140
  Dan R. Olsen, Jr.; S. Travis Nielsen; David Parslow
The XWeb architecture delivers interfaces to a wide variety of interactive platforms. XWeb's SUBSCRIBE mechanism allows multiple interactive clients to synchronize with each other. We define the concept of Join as the mechanism for acquiring access to a service's interface. Join also allows the formation of spontaneous collaborations with other people. We define the concept of Capture as the means for users to assemble suites of interactive resources to apply to a particular problem. These mechanisms allow users to access devices that they encounter in their environment rather than carrying all their devices with them. We describe two prototype implementations of Join and Capture. One uses a Java ring to carry a user's identification and to make connections. The other uses a set of cameras to watch where users are and what they touch. Lastly we present algorithms for resolving conflicts generated when independent interactive clients manipulate the same information.

Papers: Expressive user interfaces

Aesthetic information collages: generating decorative displays that contain information BIBAFull-Text 141-150
  James Fogarty; Jodi Forlizzi; Scott E. Hudson
Normally, the primary purpose of an information display is to convey information. If information displays can be aesthetically interesting, that might be an added bonus. This paper considers an experiment in reversing this imperative. It describes the Kandinsky system, which is designed to create displays that are first aesthetically interesting and then, as an added bonus, able to convey information. The Kandinsky system works on the basis of aesthetic properties specified by an artist (in a visual form). It then explores a space of collages composed from information-bearing images, using an optimization technique to find compositions which best maintain the properties of the artist's aesthetic expression.
Cursive: a novel interaction technique for controlling expressive avatar gesture BIBAFull-Text 151-152
  Francesca Barrientos; John Canny
We are developing an interaction technique for rich nonverbal communication through an avatar. By writing a single letter on a pen tablet device, a user can express their ideas or intentions, non-verbally, using their avatar body. Our system solves the difficult problem of controlling the movements of a highly articulated, 3D avatar model using a common input device within the context of an office environment. We believe that writing is a richly expressive and natural means for controlling expressive avatar gesture.
Novel interaction techniques for overlapping windows BIBAFull-Text 153-154
  Michel Beaudouin-Lafon
This note presents several techniques to improve window management with overlapping windows: tabbed windows, turning and peeling back windows, and snapping and zipping windows.
Voice as sound: using non-verbal voice input for interactive control BIBAFull-Text 155-156
  Takeo Igarashi; John F. Hughes
We describe the use of non-verbal features in voice for direct control of interactive applications. Traditional speech recognition interfaces are based on an indirect, conversational model: first the user gives a direction, and then the system performs the corresponding operation. Our goal is to achieve more direct, immediate interaction, like using a button or joystick, by using lower-level features of voice such as pitch and volume. We are developing several prototype interaction techniques based on this idea, such as "control by continuous voice", "rate-based parameter control by pitch," and "discrete parameter control by tonguing." We have implemented several prototype systems, and they suggest that voice-as-sound techniques can enhance traditional voice recognition approaches.
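For illustration only (not the authors' implementation), the sketch below extracts the two low-level features the abstract mentions, RMS volume and a rough autocorrelation pitch estimate, from one audio frame; either value could then drive a continuous control.

    import numpy as np

    # Illustrative only (not the authors' implementation): two low-level voice
    # features -- RMS volume and a rough autocorrelation pitch estimate -- that
    # could each drive a continuous control.

    def rms_volume(frame):
        return float(np.sqrt(np.mean(frame ** 2)))

    def pitch_hz(frame, sample_rate, fmin=80, fmax=400):
        frame = frame - frame.mean()
        corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        lo, hi = int(sample_rate / fmax), int(sample_rate / fmin)
        lag = lo + int(np.argmax(corr[lo:hi]))
        return sample_rate / lag

    # Synthetic 150 Hz "voice" frame; a real system would read the microphone.
    sr = 8000
    t = np.arange(0, 0.05, 1 / sr)
    frame = 0.3 * np.sin(2 * np.pi * 150 * t)
    print(round(rms_volume(frame), 3), round(pitch_hz(frame, sr), 1))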

Invited demonstrations

TSI (teething ring sound instrument): a design of the sound instrument for the baby BIBAFull-Text 157-158
  Naoko Kubo; Kazuhiro Jo; Ken Matsunaga
In this paper, we describe the TSI (Teething ring Sound Instrument), a new sound instrument for babies, which consists of a teething ring, a knob, an I-CubeX Digitizer [1] and a computer which processes MIDI messages. The TSI is designed to bring a musical experience to babies through their reflexive sucking motion. We provided the TSI to a baby and observed her actions with the TSI and her reactions to the generated sound. This experiment showed the potential of the TSI.
Tools for expressive text-to-speech markup BIBAFull-Text 159-160
  Erik Blankinship; Richard Beckwith
This paper describes handicapped-accessible text-to-speech markup software developed for poetry and performance. Most text-to-speech software allows the user to select a voice, but provides no control over performance parameters such as rate, volume, and pitch. For users with vocal disabilities, the default "computer voice" is often dreaded since it provides no personalization. Evolving standards exist for text-to-speech markup (Sable, Java Speech Markup Language, Spoken Text Markup Language), but few tools exist for non-experts to modify documents using these prosody options [1, 5]. Furthermore, we could find even fewer tools allowing for straightforward live performance using a synthesized voice [3]. Thus we created an easy-to-learn text-to-speech markup tool that requires little training to use.
Conducting a realistic electronic orchestra BIBAFull-Text 161-162
  Jan O. Borchers; Wolfgang Samminger; Max Mühlhäuser
Personal Orchestra is the first system to let users conduct an actual audio and video recording of an orchestra, using an infrared baton to control tempo, volume, and instrument sections. A gesture recognition algorithm interprets user input, and a novel high-fidelity playback algorithm renders audio and video data at variable speed without time-stretching artifacts. The system is installed as a public exhibit in the HOUSE OF MUSIC VIENNA.

Papers: 3D drawing

Simplicial families of drawings BIBAFull-Text 163-172
  Lucas Kovar; Michael Gleicher
In this paper we present a method for helping artists make artwork more accessible to casual users. We focus on the specific case of drawings, showing how a small number of drawings can be transformed into a richer object containing an entire family of similar drawings. This object is represented as a simplicial complex approximating a set of valid interpolations in configuration space. The artist does not interact directly with the simplicial complex. Instead, she guides its construction by answering a specially chosen set of yes/no questions. By combining the flexibility of a simplicial complex with direct human guidance, we are able to represent very general constraints on membership in a family. The constructed simplicial complex supports a variety of algorithms useful to an end user, including random sampling of the space of drawings, constrained interpolation between drawings, projection of another drawing into the family, and interactive exploration of the family.
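The sketch below shows only the interpolation step within a single simplex, with a drawing represented as a list of 2D control points; constructing the simplicial complex itself, which is the heart of the paper, is not shown.

    # Sketch of only the interpolation step inside one simplex (the construction
    # of the simplicial complex itself is the paper's contribution and is not
    # shown). A drawing is a list of 2D control points.

    def blend_drawings(vertex_drawings, weights):
        assert abs(sum(weights) - 1.0) < 1e-9 and all(w >= 0 for w in weights)
        blended = []
        for i in range(len(vertex_drawings[0])):
            x = sum(w * d[i][0] for w, d in zip(weights, vertex_drawings))
            y = sum(w * d[i][1] for w, d in zip(weights, vertex_drawings))
            blended.append((x, y))
        return blended

    # Three corner drawings of a 2-simplex (triangle), each with two points.
    a = [(0, 0), (10, 0)]
    b = [(0, 5), (10, 5)]
    c = [(5, 10), (15, 10)]
    print(blend_drawings([a, b, c], [0.5, 0.3, 0.2]))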
A suggestive interface for 3D drawing BIBAFull-Text 173-181
  Takeo Igarashi; John F. Hughes
This paper introduces a new type of interface for 3D drawings that improves the usability of gestural interfaces and augments typical command-based modeling systems. In our suggestive interface, the user gives hints about a desired operation to the system by highlighting related geometric components in the scene. The system then infers possible operations based on the hints and presents the results of these operations as small thumbnails. The user completes the editing operation simply by clicking on the desired thumbnail. The hinting mechanism lets the user specify geometric relations among graphical components in the scene, and the multiple thumbnail suggestions make it possible to define many operations with relatively few distinct hint patterns. The suggestive interface system is implemented as a set of suggestion engines working in parallel, and is easily extended by adding customized engines. Our prototype 3D drawing system, Chateau, shows that a suggestive interface can effectively support construction of various 3D drawings.
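An architectural sketch of the "suggestion engines working in parallel" idea, with hypothetical engines and hint records that are not Chateau's.

    # Architectural sketch with hypothetical engines and hint records (not
    # Chateau's code): each suggestion engine inspects the highlighted hints and
    # proposes zero or more operations, which would be shown as thumbnails.

    def extrusion_engine(hints):
        return ["extrude highlighted face"] if any(h["type"] == "face" for h in hints) else []

    def symmetry_engine(hints):
        return ["mirror first object across second"] if len(hints) == 2 else []

    ENGINES = [extrusion_engine, symmetry_engine]   # extended by adding engines

    def suggest(hints):
        suggestions = []
        for engine in ENGINES:                      # engines run independently
            suggestions.extend(engine(hints))
        return suggestions

    print(suggest([{"type": "face"}, {"type": "edge"}]))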

Papers: Novel user input

Empirical measurements of intrabody communication performance under varied physical configurations BIBAFull-Text 183-190
  Kurt Partridge; Bradley Dahlquist; Alireza Veiseh; Annie Cain; Ann Foreman; Joseph Goldberg; Gaetano Borriello
Intrabody communication (IBC) is a wireless communications technology that uses a person's body as the transmission medium for imperceptible electrical signals. Because communication is limited to the vicinity of a person's body, ambiguities arising from communication between personal devices and environmental devices when multiple people are present can, in theory, be solved simply. Intrabody communication also potentially allows data to be transferred when a person touches an IBC-enabled device. We have designed and constructed an intrabody communication system, modeled after Zimmerman's original design, and extended it to operate up to 38.4Kbps and to calculate signal strength. In this paper, we present quantitative measurements of data error rates and signal strength while varying hand distance to transceiver plate, electrode location on the body, touch plate size and shape, and several other factors. We find that plate size and shape have only minor effects, but that the distance to plate and the coupling mechanism significantly affect signal strength. We also find that portable devices, with poor ground coupling, suffer more significant signal attenuation. Our goal is to promote design guidelines for this technology and identify the best contexts for its effective deployment.
Toward more sensitive mobile phones BIBAFull-Text 191-192
  Ken Hinckley; Eric Horvitz
Although cell phones are extremely useful, they can be annoying and distracting to owners and others nearby. We describe sensing techniques intended to help make mobile phones more polite and less distracting. For example, our phone's ringing quiets as soon as the user responds to an incoming call, and the ring mutes if the user glances at the caller ID and decides not to answer. We also eliminate the need to press a TALK button to answer an incoming call by recognizing if the user picks up the phone and listens to it.
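A hypothetical rule sketch of the kind of sensor fusion described; the sensed conditions and the resulting behaviors follow the abstract, but the code, names, and priorities are illustrative.

    # Hypothetical rule sketch of the sensor fusion described; sensor names and
    # priorities are illustrative, not the authors' code.

    def ring_behavior(incoming_call, holding, looking_at_display, held_to_ear):
        if not incoming_call:
            return "idle"
        if held_to_ear:
            return "answer"          # picked up and listening: no TALK press needed
        if looking_at_display:
            return "mute ring"       # user glanced at the caller ID; let them decide
        if holding:
            return "quiet ring"      # user has responded to the call; stop being loud
        return "ring normally"

    print(ring_behavior(True, holding=True, looking_at_display=False, held_to_ear=False))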
Real-time audio buffering for telephone applications BIBAFull-Text 193-194
  Paul H. Dietz; William S. Yerazunis
A system that uses an ear proximity sensor to actively manage periods of distraction during telephone conversations is described. We detect when the phone is removed from the ear, record any incoming audio, and play it back when the phone is returned to the ear. By dropping silent intervals and speeding up playback with a pitch-preserving algorithm, we quickly return to real-time without the loss of information. This real-time audio buffering technique also allows us to create a user-activated, lossless instant replay function.
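A sketch of the catch-up logic only, under the assumption that audio arrives as fixed-size frames with a precomputed energy value; a real system would apply pitch-preserving time-scaling to the sped-up frames, which is omitted here.

    SILENCE = 0.01          # energy below which a buffered frame is dropped
    SPEEDUP = 1.5           # playback rate while catching up

    class CatchUpBuffer:
        def __init__(self):
            self.pending = []    # frames recorded while the phone is off the ear
            self.credit = 0.0    # fractional frames earned toward faster playback

        def incoming_frame(self, frame, at_ear):
            """Feed one incoming frame; return the frames to play right now."""
            if at_ear and not self.pending:
                return [frame]               # caught up: pass live audio through
            self.pending.append(frame)
            if not at_ear:
                return []                    # off the ear: just record
            # Back on the ear with a backlog: drop silences, then play the backlog
            # slightly faster than real time until it drains.
            self.pending = [f for f in self.pending if f["energy"] > SILENCE]
            self.credit += SPEEDUP
            n = int(self.credit)
            self.credit -= n
            out, self.pending = self.pending[:n], self.pending[n:]
            return out

    buf = CatchUpBuffer()
    buf.incoming_frame({"energy": 0.2}, at_ear=False)         # recorded
    buf.incoming_frame({"energy": 0.0}, at_ear=False)         # silence, recorded
    print(buf.incoming_frame({"energy": 0.3}, at_ear=True))   # backlog starts draining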
Pop through mouse button interactions BIBAFull-Text 195-196
  Robert Zeleznik; Timothy Miller; Andrew Forsberg
We present a range of novel interactions enabled by a simple modification in the design of a computer mouse. By converting each mouse button to pop through tactile push-buttons, similar to the focus/shutter-release buttons used in many cameras, users can feel, and the computer can sense, two distinct "clicks" corresponding to pressing lightly and pressing firmly to pop through. Despite the prototypical status of our hardware and software implementations, our current pop through mouse interactions are compelling and warrant further investigation. In particular, we demonstrate that pop through buttons not only yield an additional button activation state that is composable with, or even preferable to, techniques such as double-clicking, but also can endow a qualitatively novel user experience when meaningfully and consistently applied. We propose a number of software guidelines that may provide a consistent, systemic benefit; for example, light pressure may invoke default interaction (short menu), and firm pressure may supply more detail (long menu).
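A minimal sketch of how the two activation states might be mapped, following the short-menu/long-menu guideline from the abstract; the event API and menu items are hypothetical.

    # Sketch of the short-menu/long-menu guideline with a hypothetical event API;
    # the hardware reports two distinct states: a light press and a firm press
    # that "pops through".

    def show_menu(items):
        return items   # stand-in for a real menu widget

    def on_button_event(pressure_state):
        if pressure_state == "light":
            return show_menu(["Cut", "Copy", "Paste"])          # default short menu
        if pressure_state == "firm":                            # popped through
            return show_menu(["Cut", "Copy", "Paste", "Paste Special...",
                              "Clear Formatting", "Insert Comment"])
        return None

    print(on_button_event("light"))
    print(on_button_event("firm"))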
Guided gesture support in the paper PDA BIBAFull-Text 197-198
  Daniel Avrahami; Scott E. Hudson; Thomas P. Moran; Brian D. Williams
Ordinary paper offers properties of readability, fluidity, flexibility, cost, and portability that current electronic devices are often hard pressed to match. In fact, a lofty goal for many interactive systems is to be "as easy to use as pencil and paper". However, the static nature of paper does not support a number of capabilities, such as search and hyperlinking, that an electronic device can provide. The Paper PDA project explores ways in which hybrid paper-electronic interfaces can bring some of the capabilities of the electronic medium to interactions occurring on real paper. Key to this effort is the invention of on-paper interaction techniques which retain the flexibility and fluidity of normal pen and paper, but which are structured enough to allow robust interpretation and processing in the digital world. This paper considers the design of a class of simple printed templates that allow users to make common marks in a fluid fashion, allow additional gestures to be invented by the users to meet their needs, and at the same time encourage marks that are quite easy to recognize.

Papers: Tactile user interface

Haptic techniques for media control BIBAFull-Text 199-208
  Scott S. Snibbe; Karon E. MacLean; Rob Shaw; Jayne Roderick; William L. Verplank; Mark Scheeff
We introduce a set of techniques for haptically manipulating digital media such as video, audio, voicemail and computer graphics, utilizing virtual mediating dynamic models based on intuitive physical metaphors. For example, a video sequence can be modeled by linking its motion to a heavy spinning virtual wheel: the user browses by grasping a physical force-feedback knob and engaging the virtual wheel through a simulated clutch to spin or brake it, while feeling the passage of individual frames. These systems were implemented on a collection of single axis actuated displays (knobs and sliders), equipped with orthogonal force sensing to enhance their expressive potential. We demonstrate how continuous interaction through a haptically actuated device rather than discrete button and key presses can produce simple yet powerful tools that leverage physical intuition.
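A sketch of a virtual mediating model in the spirit of the spinning-wheel example (illustrative constants, not the authors' code): a heavy wheel with inertia is dragged toward the knob's velocity while the clutch is engaged, and the wheel's angle indexes video frames.

    import math

    # Illustrative constants, not the authors' code: a heavy virtual wheel is
    # dragged toward the knob's velocity while the clutch is engaged, and the
    # wheel's angle indexes video frames.

    INERTIA = 0.05                         # wheel moment of inertia
    FRICTION = 0.02                        # passive damping on the wheel
    CLUTCH_GAIN = 2.0                      # strength of the clutch coupling
    FRAMES_PER_RADIAN = 24 / (2 * math.pi)

    def step(angle, vel, knob_vel, clutch_engaged, dt=0.001):
        clutch_torque = CLUTCH_GAIN * (knob_vel - vel) if clutch_engaged else 0.0
        torque = clutch_torque - FRICTION * vel
        vel += (torque / INERTIA) * dt
        angle += vel * dt
        frame = int(angle * FRAMES_PER_RADIAN)   # which video frame to display
        knob_force = -clutch_torque              # reaction force the user feels
        return angle, vel, frame, knob_force

    # Spin the wheel up through the clutch for one second, then let it coast.
    angle, vel = 0.0, 0.0
    for i in range(3000):                        # three seconds at 1 kHz
        engaged = i < 1000
        angle, vel, frame, force = step(angle, vel,
                                        knob_vel=10.0 if engaged else 0.0,
                                        clutch_engaged=engaged)
    print(round(vel, 2), frame)                  # wheel still coasting, frames advancing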
Phidgets: easy development of physical interfaces through physical widgets BIBAFull-Text 209-218
  Saul Greenberg; Chester Fitchett
Physical widgets or phidgets are to physical user interfaces what widgets are to graphical user interfaces. Similar to widgets, phidgets abstract and package input and output devices: they hide implementation and construction details, they expose functionality through a well-defined API, and they have an (optional) on-screen interactive interface for displaying and controlling device state. Unlike widgets, phidgets also require: a connection manager to track how devices appear on-line; a way to link a software phidget with its physical counterpart; and a simulation mode to allow the programmer to develop, debug and test a physical interface even when no physical device is present. Our evaluation shows that everyday programmers using phidgets can rapidly develop physical interfaces.
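A minimal sketch of the kind of abstraction described, assuming a hypothetical servo phidget; this is not the actual Phidgets API. The connection manager tracks attached hardware, and the widget-like object falls back to a simulation mode when no device is present.

    # Minimal sketch of the kind of abstraction described (hypothetical servo
    # phidget, not the actual Phidgets API): a connection manager tracks attached
    # hardware, and the object falls back to simulation when no device is present.

    class ConnectionManager:
        """Tracks which physical devices are currently on-line."""
        def __init__(self, attached=()):
            self.attached = dict(attached)

        def find(self, kind):
            return self.attached.get(kind)   # None if the device is not plugged in

    class Servo:
        def __init__(self, connection_manager):
            self.device = connection_manager.find("servo")
            self.position = 0.0

        @property
        def simulated(self):
            return self.device is None

        def set_position(self, degrees):
            self.position = degrees
            if self.device is not None:
                self.device.write(degrees)   # drive the real hardware
            else:
                print(f"[simulated servo] position -> {degrees} degrees")

    servo = Servo(ConnectionManager())       # no hardware attached: simulation mode
    servo.set_position(45)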
DiamondTouch: a multi-user touch technology BIBAFull-Text 219-226
  Paul Dietz; Darren Leigh
A technique for creating a touch-sensitive input device is proposed which allows multiple, simultaneous users to interact in an intuitive fashion. Touch location information is determined independently for each user, allowing each touch on a common surface to be associated with a particular user. The surface generates location dependent, modulated electric fields which are capacitively coupled through the users to receivers installed in the work environment. We describe the design of these systems and their applications. Finally, we present results we have obtained with a small prototype device.
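A toy illustration of the per-user identification idea (the real system uses location-dependent modulated electric fields, not the code-division scheme below): each antenna transmits a distinct orthogonal code, and the signal coupled through a user reveals exactly which antennas that user is touching.

    import numpy as np

    # Toy code-division illustration (the real system uses location-dependent
    # modulated electric fields): each antenna transmits a distinct orthogonal
    # code, and a user's receiver sees the sum of the codes of the antennas that
    # user is touching, so touches can be attributed per user.

    H = np.array([[1.0]])
    for _ in range(3):                       # build 8x8 Walsh/Hadamard codes
        H = np.kron(H, np.array([[1.0, 1.0], [1.0, -1.0]]))
    CODES = H                                # one orthogonal code per antenna

    def received(touched_antennas):
        return CODES[sorted(touched_antennas)].sum(axis=0)

    def decode(signal):
        correlation = CODES @ signal / CODES.shape[1]   # 1 if touched, else 0
        return set(np.flatnonzero(correlation > 0.5).tolist())

    print(decode(received({2, 5})))   # user A's receiver -> {2, 5}
    print(decode(received({3})))      # user B's receiver -> {3}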