Proceedings of the 2011 ACM Symposium on User Interface Software and Technology

Fullname: Adjunct Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology
Editors: Jeff Pierce; Maneesh Agrawala; Scott Klemmer
Location: Santa Barbara, California
Dates: 2011-Oct-16 to 2011-Oct-19
Volume: 2
Publisher: ACM
Standard No: ISBN 1-4503-1014-1, 978-1-4503-1014-7
Papers: 39
Pages: 94
  1. UIST 2011-10-16 Volume 2
    1. Demonstration
    2. Doctoral symposium
    3. Poster presentation

UIST 2011-10-16 Volume 2

Demonstration

PicoPet: "Real World" digital pet on a handheld projector BIBAFull-Text 1-2
  Yuhang Zhao; Chao Xue; Xiang Cao; Yuanchun Shi
We created PicoPet, a digital pet game based on mobile handheld projectors. The player can project the pet into physical environments, and the pet behaves and evolves differently according to the physical surroundings. PicoPet creates a new form of gaming experience that is directly blended into the physical world, and thus could become incorporated into the player's daily life while reflecting their lifestyle. Multiple pets projected by multiple players can also interact with each other, potentially triggering social interactions between players. In this paper, we present the design and implementation of PicoPet, as well as directions for future explorations.
Scopemate: a tracking inspection microscope (pp. 3-4)
  Cati N. Boulanger (Vaucelle); Paul Dietz; Steven Bathiche
We propose a new interaction mechanism for inspection microscopy. The novel input device combines an optically augmented webcam with a head tracker. A head tracker controls the inspection angle of a webcam fitted with appropriate microscope optics. This allows an operator the full use of their hands while intuitively looking at the work area from different perspectives.
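At its core, the head-tracking step reduces to mapping head pose onto the camera's pan and tilt. A minimal sketch in Python; the gain and limit values are our assumptions, not figures from the abstract:

    def camera_angles(head_yaw_deg, head_pitch_deg, gain=0.8, limit_deg=30.0):
        """Map tracked head pose to pan/tilt angles for the microscope camera.

        gain < 1 damps head motion for stability; limit_deg clamps the output
        to the mount's mechanical range. Both values are illustrative.
        """
        def clamp(v):
            return max(-limit_deg, min(limit_deg, v))
        return clamp(gain * head_yaw_deg), clamp(gain * head_pitch_deg)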
SUAVE: sensor-based user-aware viewing enhancement for mobile device displays (pp. 5-6)
  Robert LiKamWa; Lin Zhong
As mobile devices are used in varied environments, ambient light and wide viewing angles impair a display's perceived quality. To combat these effects, we introduce SUAVE, our Sensor-based User-Aware Viewing Enhancement system. SUAVE senses the ambient light and viewing direction and applies corresponding image enhancements to the display content, increasing its usability. SUAVE employs a parameter calibration process to help users select suitable image enhancements for particular viewing contexts. We report implementations of SUAVE on a Motorola Xoom Tablet and an Apple iPhone 4.
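The abstract does not give SUAVE's enhancement functions, but the idea of driving image corrections from sensed lux and viewing angle can be sketched as follows; the gamma and contrast mappings here are invented for illustration:

    import numpy as np

    def enhance(image_u8, ambient_lux, viewing_angle_deg):
        """Illustrative SUAVE-style enhancement: brighten midtones under
        strong ambient light, stretch contrast for off-axis viewing."""
        img = image_u8.astype(np.float32) / 255.0
        gamma = 1.0 / (1.0 + 0.4 * np.log1p(ambient_lux / 100.0))  # assumed
        img = img ** gamma
        contrast = 1.0 + 0.01 * max(0.0, viewing_angle_deg - 10.0)  # assumed
        img = np.clip((img - 0.5) * contrast + 0.5, 0.0, 1.0)
        return (img * 255.0).astype(np.uint8)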
STIMTAC: a tactile input device with programmable friction (pp. 7-8)
  Michel Amberg; Frédéric Giraud; Betty Semail; Paolo Olivo; Géry Casiez; Nicolas Roussel
We present the STIMTAC, a touchpad device that supports friction reduction. Contrary to traditional vibrotactile approaches, the STIMTAC provides information passively, acting as a texture display. It does not transfer energy to the user but modifies how energy is dissipated within the contact area by a user-initiated friction process. We report on the iterative process that led to the current hardware design and briefly describe the software framework that we are developing to illustrate its potential.
Gesture keyboard requiring only one camera (pp. 9-10)
  Taichi Murase; Atsunori Moteki; Noriaki Ozawa; Nobuyuki Hara; Takehiro Nakai; Katsuhito Fujimoto
In this paper, we propose a novel gesture-based virtual keyboard (Gesture Keyboard) with a QWERTY layout that requires only one camera. Gesture Keyboard tracks the user's fingers and recognizes gestures as input, and each virtual key follows its corresponding finger. It is therefore possible to input characters at the user's preferred hand position, even if the hands move during typing. Because Gesture Keyboard requires only one camera to obtain sensor information, keyboard-less devices can incorporate it easily.
Digital taste interface (pp. 11-12)
  Nimesha Ranasinghe; Adrian David Cheok; Hideaki Nii; Owen Noel Newton Fernando; Gopalakrishnakone Ponnampalam
Thus far, most systems for generating taste sensations have been based on blending chemicals, and there has been no definite strategy for stimulating the sense of taste digitally. In this paper, a method for digitally actuating the sense of taste is introduced, based on stimulating the tongue electrically and thermally. The digital taste interface, a control system, is developed to stimulate taste sensations digitally on the tongue. The effects of the most influential factors, such as current, frequency, and temperature, are accounted for to stimulate the tongue non-invasively. The experimental results suggest that sourness and saltiness are the main sensations that can be evoked, with evidence of sweet and bitter sensations as well.
MAI painting brush++: augmenting the feeling of painting with new visual and tactile feedback mechanisms (pp. 13-14)
  Kenji Sugihara; Mai Otsuki; Asako Kimura; Fumihisa Shibata; Hideyuki Tamura
We have developed a mixed-reality (MR) painting system named the MR-based Artistic Interactive (MAI) Painting Expert and MAI Painting Brush which simulates the painting of physical objects in the real world. In this paper, we describe how the MAI Painting Brush was upgraded to the "MAI Painting Brush++," enabling virtual painting on virtual objects. The improved system has a visual and tactile feedback mechanism that simulates the effect of touch when used on a virtual painting target. This is achieved using deformation of the brush tip and reaction force on the hand.
ThickPad: a hover-tracking touchpad for a laptop (pp. 15-16)
  Sangwon Choi; Jaehyun Han; Sunjun Kim; Seongkook Heo; Geehyuk Lee
We explored the use of a hover-tracking touchpad in a laptop environment. To study the new experience, we implemented a prototype touchpad consisting of infrared LEDs and phototransistors, which can track fingers up to 10 mm above the surface. We demonstrate three major interaction techniques that become possible when a hover-tracking touchpad meets a laptop.
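With a grid of IR emitter/detector pairs, a hovering finger can be located as a weighted centroid of the reflected intensities. A minimal sketch; the baseline subtraction and the empty-grid test are our assumptions:

    import numpy as np

    def hover_position(sensor_grid):
        """Estimate (col, row) of a hovering finger from a 2-D array of
        phototransistor readings; returns None if nothing reflects."""
        g = np.asarray(sensor_grid, dtype=np.float32)
        g -= g.min()                 # crude ambient-light baseline removal
        if g.sum() < 1e-6:
            return None              # no finger within the ~10 mm range
        rows, cols = np.mgrid[0:g.shape[0], 0:g.shape[1]]
        return (float((cols * g).sum() / g.sum()),
                float((rows * g).sum() / g.sum()))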
TactileTape: low-cost touch sensing on curved surfaces (pp. 17-18)
  David Holman; Roel Vertegaal
TactileTape is a one-dimensional touch sensor that looks and behaves like regular tape. It can be constructed from everyday materials (a pencil, tin foil, and shelf liner) and senses single-touch input on curved and deformable surfaces. It is used as a roll of touch-sensitive material from which designers cut pieces to quickly add touch-sensitive strips to physical prototypes. TactileTape is low-cost, easy to interface, and, unlike current non-planar touch solutions [2,7,11], well suited to rapid exploration and iteration in the early design stages.
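Reading such a strip amounts to treating it as a potentiometer whose wiper is the finger. A host-side decoding sketch, assuming a single ADC channel and a noise threshold of our choosing:

    def touch_position_mm(adc_reading, strip_length_mm,
                          adc_max=1023, touch_threshold=8):
        """Map one ADC sample from the strip to a position along it;
        returns None when the reading is too small to be a real contact."""
        if adc_reading < touch_threshold:
            return None
        return strip_length_mm * adc_reading / adc_max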
Recognizing currency bills using a mobile phone: an assistive aid for the visually impaired (pp. 19-20)
  Nektarios Paisios; Alex Rubinsteyn; Vrutti Vyas; Lakshminarayanan Subramanian
Despite the rapidly increasing use of credit cards and other electronic forms of payment, cash is still widely used for everyday transactions due to its convenience, perceived security, and anonymity. However, the visually impaired might have a hard time telling paper bills apart, since, for example, all dollar bills have exactly the same size and, in general, currency bills around the world are not distinguishable by any tactile markings. We propose the use of a broadly available tool, the camera of a smartphone, and an adaptation of the SIFT algorithm to recognize partial and even distorted images of paper bills. Our algorithm improves memory efficiency and the speed of SIFT key-point classification by using a k-means clustering approach. Our results show that our system can be used in real-world scenarios to recognize unknown bills with high accuracy.
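A minimal sketch of the pipeline the abstract describes -- SIFT descriptors quantized by k-means into a visual vocabulary so that classification reduces to histogram matching. OpenCV is assumed, and all names and parameters are ours:

    import cv2
    import numpy as np

    def build_vocabulary(train_images, k=128):
        """Cluster SIFT descriptors from training bill images into k words."""
        sift = cv2.SIFT_create()
        descs = []
        for img in train_images:
            _, d = sift.detectAndCompute(img, None)
            if d is not None:
                descs.append(d)
        descs = np.vstack(descs).astype(np.float32)
        crit = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
        _, _, centers = cv2.kmeans(descs, k, None, crit, 3,
                                   cv2.KMEANS_PP_CENTERS)
        return centers

    def word_histogram(img, centers):
        """Normalized histogram of nearest-word assignments for one image."""
        _, d = cv2.SIFT_create().detectAndCompute(img, None)
        dists = ((d[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        h = np.bincount(dists.argmin(axis=1),
                        minlength=len(centers)).astype(np.float32)
        return h / (np.linalg.norm(h) + 1e-9)

    def classify(query_img, ref_histograms, centers):
        """ref_histograms maps denomination -> histogram; returns best match."""
        q = word_histogram(query_img, centers)
        return max(ref_histograms,
                   key=lambda name: float(ref_histograms[name] @ q))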
Redprint: integrating API specific "instant example" and "instant documentation" display interface in IDEs (pp. 21-22)
  Anant P. Bhardwaj; Dave Luciano; Scott R. Klemmer
Software libraries for most modern programming languages are numerous, large, and complex. Remembering the syntax and usage of APIs is difficult not just for novices but also for expert programmers. IDEs (Integrated Development Environments) provide capabilities like autocomplete and IntelliSense to assist programmers; however, programmers still need to visit search engines like Google to find API (Application Programming Interface) documentation and samples. This paper evaluates Redprint -- a browser-based development environment for PHP that integrates API-specific "Instant Example" and "Instant Documentation" display interfaces. A comparative laboratory study shows that integrating these interfaces into a development environment significantly reduces the cost of searching and thus the time to develop software.

Doctoral symposium

Role-based interfaces for collaborative software development (pp. 23-26)
  Max Goldman
Real-time collaboration between multiple simultaneous contributors to a shared document is full of both opportunities and pitfalls, as evidenced by decades of research and industry work in computer-supported cooperative work. In the domain of software engineering, collaboration is still generally achieved either via shared use of a single computer (e.g. pair programming) or with version control (and manual pushing and pulling of changes). By examining and designing for the different roles collaborating programmers play when working synchronously together, we can build real-time collaborative programming systems that make their collaboration more effective. And beyond simple shared editing, we can provide asymmetric, role-specific interfaces on their shared task. Collabode is a web-based IDE for collaborative programming with simultaneous editors that, along with several novel models for closely-collaborative software development, explores the potential of real-time cooperative programming.
Using graphical representation of user interfaces as visual references (pp. 27-30)
  Tsung-Hsiang Chang
Many user interfaces use indirect references to identify specific objects and devices. My thesis investigates using graphical representations of user interfaces (i.e. screenshots) as direct visual references to support various kinds of applications. Sikuli Script enables users to programmatically control GUIs without support from the underlying applications. Sikuli Test lets GUI developers and testers create test scripts without coding. Deep Shot introduces a framework and interaction techniques to migrate work states across heterogeneous devices in one action: taking a picture. In addition to these pure pixel-based systems, PAX associates the pixel representation with the internal structures and metadata of the user interface. Based on these building blocks, we propose to develop a visual history system that enables users to search and browse what they have seen on their computer screens. We outline some interesting use cases and discuss the challenges in this ongoing work.
Accessibility for individuals with color vision deficiency (pp. 31-34)
  David R. Flatla
Individuals with Color Vision Deficiency (CVD) are often unable to distinguish between colors that individuals without CVD can distinguish. Recoloring tools exist that modify the colors in an image so they are more easily distinguishable for those with CVD. These tools use models of color differentiation that rely on many assumptions about the environment and user. However, these assumptions rarely hold in real-world use cases, leading to incorrect color modification by recoloring tools. In this doctoral symposium, I will present Situation-Specific Models (SSMs) as a solution to this problem. SSMs are color differentiation models created in-situ via a calibration procedure. This calibration procedure captures the exact color differentiation abilities of the user, allowing a color differentiation model to be created that fits the user and his/her environmental situation. An SSM-based recoloring tool will be able to provide recolored images that most accurately reflect the color differentiation abilities of a particular individual in a particular environment.
Augmenting the SCOPE of interactions with implicit and explicit graphical structures (pp. 35-38)
  Raphaël Hoarau
When using interactive graphical tools, users often have to manage a structure, i.e. the arrangement of and relations between the parts or elements of the content. However, interaction with structures can be complex and poorly integrated with interaction with the content itself. Based on contextual inquiries and past work, we have identified a number of concepts and requirements for interacting with structure. We have explored several interactive tools and present one of them in this paper: a new kind of property sheet that relies on the implicit structure of graphics. The tool augments the scope of interactions to multiple objects.
Mobile multi-display environments (pp. 39-42)
  Jessica R. Cauchard
Mobile devices are increasingly being fitted with more than one display, presenting a new breed of Mobile Multi-Display Environments (MMDEs). It is however still unclear how the extra display fits within the mobile devices' ecology in terms of visualisation and interaction. My research explores the alignment between multiple displays in a mobile environment and how different alignments affect usability and the choice of a suitable interaction technique. In order to investigate those properties and adapt them to various use cases, I will build a steerable projection system to study different alignments, then analyse visual separation effects in MMDEs and finally explore the possibilities offered when the displays are overlapping.
Advanced interaction with mobile projection interfaces (pp. 43-46)
  Markus Löchtefeld
The increasing miniaturization of projection units now makes it possible to integrate them into everyday objects. Even though these so-called pico-projectors are already being integrated into mobile devices such as phones and digital cameras, comparatively little research has been conducted to exploit these devices to their full capabilities. I outline my previous and current work towards an interface design and a privacy framework that will enable mobile projection devices to become part of people's everyday lives. My work is divided into two directions: on the one hand, the development of a single-user-scenario interface, and on the other, a framework to cope with privacy issues. This will allow deeper exploitation of the capabilities of mobile projection units for a variety of everyday tasks.
Designing for effective end-user interaction with machine learning (pp. 47-50)
  Saleema Amershi
End-user interactive machine learning is a promising tool for enhancing human capabilities with large data. Recent work has shown that we can create end-user interactive machine learning systems for specific applications. However, we still lack a generalized understanding of how to design effective end-user interaction with interactive machine learning systems. My dissertation work aims to advance our understanding of this question by investigating new techniques that move beyond naïve or ad-hoc approaches and balance the needs of both end-users and machine learning algorithms. Although these explorations are grounded in specific applications, we endeavored to design strategies independent of application or domain specific features. As a result, our findings can inform future end-user interaction with machine learning systems.

Poster presentation

Dynamic ambient lighting for mobile devices (pp. 51-52)
  Qian Qin; Michael Rohs; Sven Kratz
The information a small mobile device can show via its display has always been limited by its size. In large information spaces, relevant information, such as important locations on a map, can get clipped when a user starts zooming and panning. Dynamic ambient lighting allows mobile devices to visualize off-screen objects by illuminating the background without compromising valuable display space. The lighted spots show the direction and distance of such objects through the spot's position and intensity. Dynamic ambient lighting also provides a new way of displaying the state of a mobile device. Illumination is provided by a prototype rear-of-device shell that contains LEDs and requires the device to be placed on a surface, such as a table or desk.
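Mapping an off-screen target to a lighted spot comes down to choosing which LED faces the target and how brightly to drive it. A sketch assuming LEDs evenly spaced around the device edge; the geometry and the linear falloff are our assumptions:

    import math

    def led_for_target(dx_mm, dy_mm, n_leds=16, max_dist_mm=500.0):
        """Return (led_index, intensity in 0..1) for a target offset
        (dx, dy) from the screen centre; nearer targets glow brighter."""
        angle = math.atan2(dy_mm, dx_mm) % (2.0 * math.pi)
        index = round(angle / (2.0 * math.pi) * n_leds) % n_leds
        intensity = max(0.0, 1.0 - math.hypot(dx_mm, dy_mm) / max_dist_mm)
        return index, intensity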
Active bone-conducted sound sensing for wearable interfaces (pp. 53-54)
  Kentaro Takemura; Akihiro Ito; Jun Takamatsu; Tsukasa Ogasawara
In this paper, we propose a wearable sensor system that uses bone-conducted sound to measure the angle of the elbow and the position tapped by a finger. Our system consists of two microphones and a speaker attached to the forearm. A novelty of this paper is the use of active sensing for measuring the elbow angle: the speaker emits sound into the bone, and a microphone receives the sound reflected at the elbow, where the reflection depends on the elbow angle. Since the bone-conducted sound produced by tapping and that emitted by the speaker differ in frequency, the two proposed techniques can be used simultaneously. We confirmed the feasibility of the proposed system through experiments.
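Because the emitted probe tone and the tap transients occupy different frequency bands, one microphone stream can serve both measurements by splitting its spectrum. A sketch with invented band edges (the abstract does not give the actual frequencies):

    import numpy as np

    def band_energy(samples, fs, lo_hz, hi_hz):
        """Total spectral power of `samples` between lo_hz and hi_hz."""
        power = np.abs(np.fft.rfft(samples)) ** 2
        freqs = np.fft.rfftfreq(len(samples), 1.0 / fs)
        return float(power[(freqs >= lo_hz) & (freqs < hi_hz)].sum())

    def split_measurements(mic_samples, fs=44100):
        """Return (probe_energy, tap_energy). Probe energy varies with the
        elbow angle and would be mapped through a calibration table."""
        probe = band_energy(mic_samples, fs, 18000.0, 20000.0)  # assumed band
        tap = band_energy(mic_samples, fs, 50.0, 2000.0)        # assumed band
        return probe, tap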
TOPS: television object promoting system (pp. 55-56)
  Tun-Hao You; Yi-Jui Wu; Yi-Jen Yeh
In this short paper, we propose the Television Object Promoting System (TOPS), a What You See Is What You Get (WYSIWYG) user interface and user experience designed for interacting with objects in TV programs. Using TOPS while watching TV, consumers can acquire information about objects appearing in TV programs, such as merchandise, people, and scenic spots. Moreover, consumers can purchase merchandise directly, or obtain services or items related to those objects. In addition, vendors can provide detailed product and sales information about the objects. TOPS offers not only convenience to consumers but also new marketing methods to vendors. The paper also discusses the features, design, and implementation of TOPS.
Execution control for crowdsourcing (pp. 57-58)
  Daniel S. Weld; Mausam; Peng Dai
Crowdsourcing marketplaces enable a wide range of applications, but constructing any new application is challenging -- usually requiring a complex, self-managing workflow to guarantee quality results. We report on the CLOWDER project, which uses machine learning to continually refine models of worker performance and task difficulty. We present decision-theoretic optimization techniques that can select the best parameters for a range of workflows. Initial experiments show that our optimized workflows are significantly more economical than those with manually set parameters.
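As a toy illustration of decision-theoretic parameter selection (far simpler than CLOWDER's learned models), consider choosing how many redundant votes to request per task by maximizing expected utility under an assumed worker-accuracy model:

    from math import comb

    def expected_utility(n_votes, p_correct, value=1.0, cost_per_vote=0.02):
        """Expected value of a majority vote among n workers, each correct
        independently with probability p_correct, minus the labor cost."""
        p_majority = sum(comb(n_votes, k) * p_correct**k *
                         (1.0 - p_correct)**(n_votes - k)
                         for k in range(n_votes // 2 + 1, n_votes + 1))
        return value * p_majority - cost_per_vote * n_votes

    # Odd vote counts only, to avoid ties; accuracy 0.7 is an assumption.
    best_n = max(range(1, 16, 2), key=lambda n: expected_utility(n, 0.7))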
Fit your hand: personalized user interface considering physical attributes of mobile device users (pp. 59-60)
  Hosub Lee; Young Sang Choi
We present a mobile user interface that dynamically reformulates its layout based on the user's touch input pattern. By analyzing touch input, it infers users' physical characteristics, such as handedness, finger length, and usage habits, and thereby calculates the optimal touch area for the user. The user interface gradually adapts to each user by automatically rearranging graphic objects, such as application icons, to the most easy-to-touch positions. To compute the optimal touch area, we designed the software architecture and implemented an Android application that analyzes touch input and determines the touch frequency in specific screen areas as well as the user's handedness and hand size. As a proof of concept, this research prototype shows acceptable performance and accuracy. To decide which items should be placed in the optimal touch area, we plan to integrate into the proposed system our machine-learning algorithm, which prioritizes applications according to the user's context.
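One way to realize the rearrangement step is to rank screen cells by observed touch frequency and assign icons in priority order; this sketch is our reconstruction, not the paper's algorithm:

    import numpy as np

    def rearrange_icons(touch_log, icons_by_priority, grid=(4, 5)):
        """touch_log holds (x, y) pairs normalized to [0, 1); returns a map
        from icon to the grid cell it should occupy, hottest cells first."""
        heat = np.zeros(grid)
        for x, y in touch_log:
            heat[int(x * grid[0]), int(y * grid[1])] += 1
        order = np.argsort(heat, axis=None)[::-1]        # hottest first
        cells = list(zip(*np.unravel_index(order, grid)))
        return dict(zip(icons_by_priority, cells))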
Poke: emotional touch delivery through an inflatable surface over interpersonal mobile communications (pp. 61-62)
  Young-Woo Park; Sungjae Hwang; Tek-Jin Nam
In this paper we present Poke -- a soft and human-like remote touch technique delivered through an inflatable surface. We designed it to deliver more emotional and pleasant touches over interpersonal mobile communication. Poke touches the user's skin with an inflatable surface according to the other party's finger pressure and hand gestures during a phone call. It delivers different kinds of pokes and other affective touches through its inflation patterns (strength and repetition) and vibrations from the top of the inflatable surface. The paper also suggests affective touches, such as weak/hard poke, poke and then shake, poke back, and pat, which can be exchanged during typical phone calls.
Scopemate: a robotic microscope (pp. 63-64)
  Cati Boulanger; Paul Dietz; Steven Bathiche
Scopemate is a robotic microscope that tracks the user for inspection microscopy. The novel input device combines an optically augmented webcam with a head tracker. A head tracker controls the inspection angle of a webcam fitted with appropriate microscope optics. This allows an operator the full use of their hands while intuitively looking at the work area from different perspectives.
Composition for conductor and audience: new uses for mobile devices in the concert hall (pp. 65-66)
  Charles Roberts; Tobias Hollerer
Composition for Conductor and Audience is an audience interaction piece first performed for an audience of over seventy-five people in June of 2011. The audience becomes the orchestra in this composition as they control different musical variables using the touchscreen surfaces on their personal mobile devices. To the authors' knowledge this is the first concert piece for bi-directional networked interactivity on audience-owned mobile devices to ever be performed. Audience members participated using the iOS / Android application 'Control', a generic solution for creating touchscreen interfaces written by the first author. Over twenty members of the audience participated in the performance, using their personal devices to match gestures made by the conductor with corresponding gestures on their mobile devices.
MARBLS: a visual environment for building clinical alert rules (pp. 67-68)
  Dave Krebs; Alexander Conrad; Milos Hauskrecht; Jingtao Wang
Physicians and nurses usually rely on hospital information systems (HIS) to detect a variety of adverse clinical conditions and to issue reminders for repetitive treatments. However, acquiring the alert rules an HIS needs from experts remains a challenging, error-prone, and time-consuming process. In this work, we present MARBLS (Medical Alert Rule BuiLding System) -- a visual environment that facilitates the design and definition of clinical alert rules. MARBLS combines a visual rule workspace and a visual query explorer with two-way synchronization. Monitoring rules can be built by manipulating block components in the rule workspace, by querying and generalizing regions of interest in the visual query explorer via direct manipulation, or by a combination of both. Informal testing with doctors has yielded positive feedback.
Cloudtop: a workspace for the cloud (pp. 69-70)
  Hubert Pham; Justin Mazzola Paluska; Robert C. Miller; Steve Ward
Even as users rely more on the web for their computing needs, they continue to depend on a desktop-like area for quick access to in-use resources. The traditional desktop is file-centric and prone to clutter, making it suboptimal for use in a web-dominated world. This paper introduces Cloudtop, a browser plugin that offers a lightweight workspace for temporary items, optimized around the idea that its contents originate from and will ultimately return to the web. Cloudtop improves upon the desktop by 1) implementing a simple, time-based notebook metaphor for managing clutter, 2) capturing and bundling extensible metadata for web resources, and 3) providing a platform for greater interface uniformity across sites.
Maintaining shared mental models in anesthesia crisis care with nurse tablet input and large-screen displays (pp. 71-72)
  Leslie Wu; Jesse Cirimele; Stuart Card; Scott Klemmer; Larry Chu; Kyle Harrison
In an effort to reduce medical errors, doctors are beginning to embrace cognitive aids, such as paper-based checklists. We describe the early stage design process of an interactive cognitive aid for crisis care teams. This process included collaboration with anesthesia professors in the school of medicine and observation of medical students practicing in simulated scenarios. Based on these insights, we identify opportunities to employ large-screen displays and coordinated tablets to support team performance. We also propose a system design for interactive cognitive aids intended to encourage a shared mental model amongst crisis care staff.
HaCHIStick: simulating haptic sensation on tablet PC for musical instruments application (pp. 73-74)
  Taku Hachisu; Michi Sato; Shogo Fukushima; Hiroyuki Kajimoto
In this paper, we propose a novel stick-type interface, the "HaCHIStick," for musical performance on a tablet PC. The HaCHIStick is composed of a stick with an embedded vibrotactile actuator, a visual display, and an elastic sheet on the display. By combining the kinesthetic sensation induced by striking the elastic sheet with vibrotactile sensation, the system provides natural haptic cues that let the user feel the material they strike with the stick, such as steel or wood. This haptic interaction enriches the user's experience of playing the instrument. The interface can be regarded as a type of haptic augmented reality (AR) system with a relatively simple setup.
TouchString: a flexible linear multi-touch sensor for prototyping a freeform multi-touch surface (pp. 75-76)
  Jiseong Gu; Geehyuk Lee
We propose the concept of prototyping a multi-touch surface of arbitrary form using a flexible linear multi-touch sensor that we call TouchString. We defined the conceptual structure of a TouchString and implemented an example prototype. We verified the feasibility of the concept by demonstrating a few basic application scenarios using the prototype.
MUST-D: multi-user see through display (pp. 77-78)
  Abhijit Karnik; Walterio Mayol-Cuevas; Sriram Subramanian
In this paper we present MUST-D, a multi-user see-through display that allows users to inspect objects behind a glass panel while projecting view-dependent information on the glass to the user. MUST-D uses liquid crystal panels to implement a multi-view see-through display space in front of physical objects.
Digital taste for remote multisensory interactions (pp. 79-80)
  Nimesha Ranasinghe; Adrian David Cheok; Hideaki Nii; Owen Noel Newton Fernando; Gopalakrishnakone Ponnampalam
We present a novel control system that enables digital stimulation of the sense of taste (gustation) in humans to enhance remote multisensory interactions. The system uses two approaches to actuate taste sensations digitally: electrical and thermal stimulation of the tongue. The experimental results suggest that sourness and saltiness are the main sensations that can be evoked, alongside some evidence of sweet and bitter sensations.
Artisanship training using wearable egocentric display (pp. 81-82)
  Atsushi Hiyama; Yusuke Doyama; Mariko Miyashita; Eikan Ebuchi; Masazumi Seki; Michitaka Hirose
In recent years, much traditional artisanship has been declining as skilled artisans age and successors become fewer. Methods for digitally archiving such traditional artisanship are therefore needed. We have constructed a wearable skill-training interface that displays an artisan's egocentric visual and audio information together with muscle activity. We used acceleration data from an instrument, associated with tool usage, to evaluate the effect of the proposed wearable display system. This paper introduces the concept and development of the wearable egocentric display, and then briefly reports application results in Kamisuki, traditional Japanese papermaking.
Tracking indoor location and motion for navigational assistance (pp. 83-84)
  Nektarios Paisios; Alex Rubinsteyn; Lakshminarayanan Subramanian; Matt Tierney; Vrutti Vyas
Visually impaired people have a harder time remembering their way around complex unfamiliar buildings, while obtaining the help of a sighted guide is not always possible or desirable. By sensing the user's location and motion, however, mobile phone software can provide navigational assistance in such situations, obviating the need for human guides. We present a simple-to-operate and highly usable mobile navigational guide that uses Wi-Fi and accelerometer sensors to help the user retrace paths that have already been walked once.
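A common way to implement such path retracing is to record Wi-Fi fingerprints along a route once and later match the live scan to the nearest recorded waypoint. A minimal sketch of the matching step (our reconstruction):

    import math

    def fingerprint_distance(fp_a, fp_b):
        """Euclidean distance over RSSI values of access points seen in
        both scans; fingerprints are {bssid: rssi_dBm} dicts."""
        shared = set(fp_a) & set(fp_b)
        if not shared:
            return math.inf
        return math.sqrt(sum((fp_a[ap] - fp_b[ap]) ** 2 for ap in shared))

    def nearest_waypoint(live_scan, recorded_path):
        """Index of the recorded scan closest to the live one."""
        return min(range(len(recorded_path)),
                   key=lambda i: fingerprint_distance(live_scan,
                                                      recorded_path[i]))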
MoodMusic: a method for cooperative, generative music playlist creation (pp. 85-86)
  Jared S. Bauer; Alex Jansen; Jesse Cirimele
Music is a major element of social gatherings. However, creating playlists that suit everyone's tastes and the mood of the group can require a large amount of manual effort. In this paper, we present MoodMusic, a method to dynamically generate contextually appropriate music playlists for groups of people. MoodMusic uses speaker pitch and intensity in the conversation to determine the current 'mood'. MoodMusic then queries the online music libraries of the speakers to choose songs appropriate for that mood. This allows groups to listen to music appropriate for their current mood without managing playlists. This work contributes a novel method for dynamically creating music playlists for groups based on their music preferences and current mood.
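A coarse prosody-to-mood mapping of the kind described might look like the following; the thresholds and mood labels are our invention:

    def mood_from_speech(mean_pitch_hz, mean_intensity_db,
                         pitch_ref_hz=180.0, intensity_ref_db=60.0):
        """Classify conversation mood from average pitch and loudness."""
        high_arousal = mean_intensity_db > intensity_ref_db
        high_valence = mean_pitch_hz > pitch_ref_hz
        return {(True, True): "excited", (True, False): "tense",
                (False, True): "content", (False, False): "calm"}[
                    (high_arousal, high_valence)]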
An asymmetric communications platform for knowledge sharing with low-end mobile phones (pp. 87-88)
  Neil Patel; Scott R. Klemmer; Tapan S. Parikh
We present Awaaz.De ("give voice"), a social platform for communities to access and share knowledge using low-end mobile phones. Awaaz.De features a configurable mobile voice application organized into asynchronous voice message boards. For poor, remote, and marginal communities, the voice-touchtone interface addresses the constraints of low literacy, language diversity, and affordability of only basic mobile devices. Voice content also presents a low barrier to content authoring, encouraging otherwise disconnected communities to actively participate in knowledge exchange. Awaaz.De includes a web-based administration interface for Internet-connected community managers to moderate, annotate, categorize, route, and narrow-cast voice messages. In this paper we describe the platform's design, implementation, and future directions.
AdaptableGIMP: designing a socially-adaptable interface (pp. 89-90)
  Benjamin Lafreniere; Andrea Bunt; Matthew Lount; Filip Krynicki; Michael A. Terry
We introduce the concept of a socially-adaptable interface, an interface that provides instant access to task-specific interface customizations created, edited, and documented by the application's user community. We demonstrate this concept in AdaptableGIMP, a modified version of the GIMP image editor that we have developed.
Embedding interface sketches in code (pp. 91-92)
  James Simpson; Michael Terry
This paper presents a user interface (UI) design tool, GUIIO, which uses ASCII text as its medium for rendering interface components. Like other UI design tools, GUIIO allows individuals to create and manipulate UI components as first-class objects. However, GUIIO has the advantage that its UI designs can be embedded directly within the program code itself. We implemented GUIIO as an extension to an existing development environment. As a result, developers can fluidly transition from editing code to editing the UI mock-up, with the text editor automatically switching its mode from code editing to UI editing as a function of the location of the cursor. By rendering UIs as ASCII art, GUIIO fills an important gap in the design, implementation, and revision of UIs by providing a highly portable and immediately accessible visual representation of the UI that is embedded within the code itself.
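To make the idea concrete, here is a toy parser that recognizes a few widget types in an ASCII mock-up. The mock-up syntax is invented; the paper does not reproduce GUIIO's actual grammar:

    import re

    MOCKUP = """\
    [ OK ]  [ Cancel ]
    (o) small  ( ) large
    [x] remember me
    """

    def parse(mockup):
        """Return (widget_type, label, line, column) tuples from the text."""
        widgets = []
        for line_no, line in enumerate(mockup.splitlines()):
            for m in re.finditer(r"\[x\] *(\w[\w ]*)", line):
                widgets.append(("checkbox", m.group(1), line_no, m.start()))
            for m in re.finditer(r"\(([ox ])\) *(\w+)", line):
                widgets.append(("radio", m.group(2), line_no, m.start()))
            for m in re.finditer(r"\[ *([A-Za-z][\w ]*?) *\]", line):
                if m.group(1).lower() != "x":
                    widgets.append(("button", m.group(1), line_no, m.start()))
        return widgets

    print(parse(MOCKUP))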