
Adjunct Proceedings of the 2013 ACM Symposium on User Interface Software and Technology

Fullname: Adjunct Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology
Editors: Shahram Izadi; Aaron Quigley; Ivan Poupyrev; Takeo Igarashi
Location: St. Andrews, United Kingdom
Dates: 2013-Oct-08 to 2013-Oct-11
Standard No: ISBN: 978-1-4503-2406-9; ACM DL: Table of Contents; hcibib: UIST13-2
Links: Conference Website
  1. UIST 2013-10-08 Volume 2
    1. Adjunct 1: demonstrations
    2. Adjunct 2: sponsor demonstrations
    3. Adjunct 3: doctoral consortium/symposium submissions
    4. Adjunct 4: posters

UIST 2013-10-08 Volume 2

Adjunct 1: demonstrations

PUCs: detecting transparent, passive untouched capacitive widgets on unmodified multi-touch displays, pp. 1-2
  Simon Voelker; Kosuke Nakajima; Christian Thoresen; Yuichi Itoh; Kjell Ivar Øvergård; Jan Borchers
Capacitive multi-touch displays are not typically designed to detect passive objects placed on them. In fact, these systems usually contain filters to actively reject such input data. We present a technical analysis of this problem and introduce Passive Untouched Capacitive Widgets (PUCs). Unlike previous approaches, PUCs do not require power, they can be made entirely transparent, and they do not require internal electrical or software modifications. Most importantly, they are detected reliably even when no user is touching them.
The nudging technique: input method without fine-grained pointing by pushing a segment, pp. 3-4
  Shota Yamanaka; Homei Miyashita
The Nudging Technique is a new manipulation paradigm for GUIs. With traditional techniques, the user sometimes has to perform a fine-grained operation (e.g., pointing at the edge of a window to resize it). When the user makes a mistake in this pointing, problems may arise, such as accidentally switching the foreground window. The nudging technique relieves the user of fine-grained pointing before dragging: the user simply moves the cursor to a target and then pushes it. Visual and acoustic feedback also assists the user's operation. We describe two application examples: window resizing and spreadsheet cell resizing.
QOOK: a new physical-virtual coupling experience for active reading, pp. 5-6
  Yuhang Zhao; Yongqiang Qin; Yang Liu; Siqi Liu; Yuanchun Shi
We present QOOK, an interactive reading system that incorporates the benefits of both physical and digital books to facilitate active reading. QOOK uses a top-mounted projector to display digital content on a blank paper book. By detecting markers attached to each page, QOOK allows users to flip pages just as they would with a real book. Electronic functions such as keyword searching, highlighting, and bookmarking provide users with additional digital assistance. With a Kinect sensor that recognizes touch gestures, QOOK enables people to use these electronic functions directly with their fingers. The combination of the electronic functions of the virtual interface and free-form interaction with the physical book creates a natural reading experience, providing an opportunity for faster navigation between pages and better understanding of the book's contents.
Surface haptic interactions with a TPad tablet, pp. 7-8
  Joe Mullenbach; Craig Shultz; Anne Marie Piper; Michael Peshkin; J. Edward Colgate
A TPad Tablet is a tablet computer with a variable-friction touchscreen. It can create the perception of force, shape, and texture on a fingertip, enabling unique and novel haptic interactions on a flat touchscreen surface. We have created an affordable, easy-to-use variable-friction device and made it available through the open-hardware TPad Tablet Project. We present this device as a potential research platform and demonstrate two applications: remote touch communication and rapid haptic sketching.
PhysInk: sketching physical behavior, pp. 9-10
  Jeremy Scott; Randall Davis
Describing device behavior is a common task that is currently not well supported by general animation or CAD software. We present PhysInk, a system that enables users to demonstrate 2D behavior by sketching and directly manipulating objects on a physics-enabled stage. Unlike previous tools that simply capture the user's animation, PhysInk captures an understanding of the behavior in a timeline, enabling useful capabilities such as causality-aware editing and finding physically correct equivalent behavior. We envision PhysInk being used as a physics teacher's sketchpad or a WYSIWYG tool for game designers.
Foreign manga reader: learn grammar and pronunciation while reading comics, pp. 11-12
  Geza Kovacs; Robert C. Miller
Foreign-language comics are potentially an enjoyable way to learn foreign languages. However, the difficulty of reading authentic material makes them inaccessible to novice learners. We present the Foreign Manga Reader, a system that helps readers comprehend foreign-language written materials and learn multiple aspects of the language. Specifically, it generates a sentence-structure visualization to help learners understand the grammar, pronounces dialogs to improve listening comprehension and pronunciation, and translates dialogs, phrases, and words to teach vocabulary. Learners can use the system at their experience level: novices get access to dialog-level translations and pronunciations, while more advanced learners get information at the level of phrases and individual words. The annotations are generated automatically and can be used with arbitrary written materials in several languages. A preliminary study suggests that learners find our system useful for understanding and learning from authentic foreign-language material.
Inkjet-printed conductive patterns for physical manipulation of audio signals, pp. 13-14
  Nan-Wei Gong; Amit Zoran; Joseph A. Paradiso
In this demo paper, we present the realization of a completely aesthetically driven conductive image as a multi-modal music controller. Combining two emerging technologies -- rapid prototyping with conductive ink on an off-the-shelf inkjet printer, and parametric graphic design -- we are able to create an interactive surface that is thin, flat, and flexible. This sensate surface can be conformally wrapped around a simple curved surface and, unlike touch screens, can accommodate complex structures and shapes, such as holes in a surface. We present the design and manufacturing flow and discuss the technology behind this multi-modal sensing design. Our work seeks to offer a new dimension of designing sonic interaction with graphic tools: playing and learning music from a visual perspective and performing with expressive physical manipulation.
Multi-touch gesture recognition by single photoreflector, pp. 15-16
  Hiroyuki Manabe
A simple technique is proposed that uses a single photoreflector to recognize multi-touch gestures. Touch and multi-finger swipe are robustly discriminated and recognized. Further, swipe direction can be detected by adding a gradient to the sensitivity.
Flexkit: a rapid prototyping platform for flexible displays, pp. 17-18
  David Holman; Jesse Burstyn; Ryan Brotman; Audrey Younkin; Roel Vertegaal
Commercially available development platforms for flexible displays are not designed for rapid prototyping. To create a deformable interface, one that uses a functional flexible display, designers must be familiar with embedded hardware systems and corresponding programming. We introduce Flexkit, a platform that allows designers to rapidly prototype deformable applications. With Flexkit, designers can rapidly prototype using a thin-film electrophoretic display, one that is "Plug and Play". To demonstrate Flexkit's ease-of-use, we present its application in PaperTab's design iteration as a case study. We further discuss how dithering can be used to increase the frame rate of electrophoretic displays from 1fps to 5fps.
BoardLab: PCB as an interface to EDA software, pp. 19-20
  Pragun Goyal; Harshit Agrawal; Joseph A. Paradiso; Pattie Maes
The tools used to work with Printed Circuit Boards (PCBs) -- for example, soldering irons, multimeters, and oscilloscopes -- involve working directly with the board and its components. However, the Electronic Design Automation (EDA) software used to query a PCB's design data requires a keyboard and mouse. These different interfaces make it difficult to connect the two kinds of operations in a workflow. Further, measurements made by tools like a multimeter have to be manually related to the schematics of the board. We propose a solution that reduces the cognitive load of this disconnect: a handheld probe that allows direct interactions with the PCB for just-in-time information on board schematics, component datasheets, and source code. The probe also doubles as a voltmeter and annotates the schematics of the board with voltage measurements.
Glassified: an augmented ruler based on a transparent display for real-time interactions with paper, pp. 21-22
  Anirudh Sharma; Lirong Liu; Pattie Maes
We introduce Glassified, a modified ruler with a transparent display to supplement physical strokes made on paper with virtual graphics. Because the display is transparent, both the physical strokes and the virtual graphics are visible in the same plane. A digitizer captures the pen strokes in order to update the graphical overlay, fusing the traditional function of a ruler with the added advantages of a digital, display-based system. We describe use-cases of Glassified in the areas of math and physics and discuss its advantages over traditional systems.
FlexStroke: a jamming brush tip simulating multiple painting tools on digital platform, pp. 23-24
  Xin Liu; Haijun Xia; Jiawei Gu
We propose a new system that enables a realistic painting experience on a digital platform and extends it to multiple stroke types for different painting needs. In this paper, we describe how the FlexStroke is used as a Chinese brush, oil brush, and crayon by changing its jamming tip. The tip offers different levels of stiffness based on its jamming structure. Visual simulations on PixelSense enhance the intuitive painting process with highly realistic display results.

Adjunct 2: sponsor demonstrations

AIREAL: tactile interactive experiences in free air, pp. 25-26
  Rajinder Sodhi; Matthew Glisson; Ivan Poupyrev
AIREAL is a novel haptic technology that delivers effective and expressive tactile sensations in free air, without requiring the user to wear a physical device. Combined with interactive computer graphics, AIREAL enables users to feel virtual 3D objects, experience free-air textures, and receive haptic feedback on gestures performed in free space. AIREAL relies on air vortex generation, directed by an actuated flexible nozzle, to provide effective tactile feedback with a 75-degree field of view and a resolution of 8.5 cm at a distance of 1 meter. AIREAL is a scalable, inexpensive, and practical free-air haptic technology that can be used in a broad range of applications, including gaming, mobile applications, and gesture interaction, among many others. This paper reports the details of the AIREAL design and control, experimental evaluations of the device's performance, and an exploration of the application space of free-air haptic displays. Although we used vortices, we believe the results reported are generalizable and will inform the design of haptic displays based on alternative principles of free-air tactile actuation.
Ambient surface: enhancing interface capabilities of mobile objects aided by ambient environment, pp. 27-28
  Taik Heon Rhee; Minkyu Jung; Sungwook Baek; Hyun-Jin Kim; Sungbin Kuk; Seonghoon Kang; Hark-Joon Kim
We introduce Ambient Surface, an interactive ambient system that enhances the interface capabilities of mobile devices placed on an ordinary surface. Object information and the user's interactions are captured by 2D/3D cameras, and appropriate feedback images are projected onto the surface. With the help of the ambient system, we can not only provide a wider screen for mobile devices with limited screen size, but also allow analog objects to interact dynamically with users. We believe this demo will help interaction designers draw new inspiration for combining mobile objects with an ambient environment.

Adjunct 3: doctoral consortium/symposium submissions

Pixel-based reverse engineering of graphical interfaces, pp. 29-32
  Morgan Dixon
My dissertation proposes a vision in which anybody can modify any interface of any application. Realizing this vision is difficult because of the rigidity and fragmentation of current interfaces. Specifically, rigidity makes it difficult or impossible for a designer to modify or customize existing interfaces. Fragmentation results from the fact that people generally use many different applications built with a variety of toolkits. Each is implemented differently, so it is difficult to consistently add new functionality. As a result, researchers are often limited to demonstrating new ideas in small testbeds, and practitioners often find it difficult to adopt and deploy ideas from the literature. In my dissertation, I propose transcending the rigidity and fragmentation of modern interfaces by building upon their single largest commonality: that they ultimately consist of pixels painted to a display. Building from this universal representation, I propose pixel-based interpretation to enable modification of interfaces without their source code and independent of their underlying toolkit implementation.
Augmenting the input space of portable displays using add-on hall-sensor grid, pp. 33-36
  Rong-Hao Liang
Because handheld and wearable displays are highly mobile, they enable various applications that enrich our daily life. In addition to displaying high-fidelity information, these devices also support natural and effective user interactions by exploiting various embedded sensors. Nonetheless, the set of built-in sensors has limitations, so add-on sensor technologies are needed. This work exploits magnetism as an additional channel of user input. The author first explains the reasons for developing the add-on magnetic-field sensing technology, based on neodymium magnets and an analog Hall-sensor grid. The augmented input space is then showcased through two instances. 1) For handheld displays, simply attaching the sensor to the back of the device extends object tracking into the near-surface 3D space. 2) For wearable displays, wearing the sensor on the user's fingernails enables private, haptically rich 2D input. Limitations and possible research directions of this approach are highlighted at the end of the paper.
Cross-device eye-based interaction, pp. 37-40
  Jayson Turner
Eye-tracking technology is envisaged to become part of our daily life; as its development progresses, it becomes more wearable. Additionally, there is a wealth of digital content around us, either close at hand on our personal devices or out of reach on public displays. This work aims to combine gaze with mobile input modalities to enable the transfer of content between public displays and close-proximity personal displays. The work contributes enabling technologies and novel interaction techniques, and poses broader questions that move toward a formalisation of this design space, with the goal of developing guidelines for future cross-device eye-based interaction methods.
Enabling an ecosystem of personal behavioral data, pp. 41-44
  Jason Wiese
Almost every computational system a person interacts with keeps a detailed log of that person's behavior. This data promises a breadth of new service opportunities for improving people's lives through deep personalization, tools to manage aspects of their personal wellbeing, and services that support identity construction. However, the way this data is collected and managed today introduces several challenges that severely limit its utility.
   This thesis maps out a computational ecosystem for personal behavioral data through the design, implementation, and evaluation of Phenom, a web service that factors out common activities in making inferences from personal behavioral data. The primary benefits of Phenom include: a structured process for aggregating and representing user data; support for developing models based on personal behavioral data; and a unified API for accessing inferences made by models within Phenom. To evaluate Phenom for ease of use and versatility, an external set of developers will create example applications with it.
Exploring back-of-device interaction, pp. 45-48
  Mohammad Faizuddin Mohd Noor
Back-of-device interaction is gaining popularity as an alternative input modality for mobile devices; however, it is still unclear how it relates to other interactions. My research explores the relationship between hand grip on the back of the device and other interactions. To investigate this relationship, I will use a touch-targeting application to study hand grip patterns, then analyse the correlation between touch targets and hand grip. Finally, I will explore the possibilities offered once the relationship between touch targets and hand grip is established in a quantifiable way.
Sensor design and interaction techniques for gestural input to smart glasses and mobile devices, pp. 49-52
  Andrea Colaço
Touchscreen interfaces for small display devices have several limitations: the act of touching the screen occludes the display, interface elements like keyboards consume precious display real estate, and even simple tasks like document navigation -- which the user performs effortlessly using a mouse and keyboard -- require repeated actions like pinch-and-zoom with touch input. More recently, smart glasses with limited or no touch input are starting to emerge commercially. However, the primary input to these systems has been voice.
   In this paper, we explore the space around the device as a means of touchless gestural input to devices with small or no displays. Capturing gestural input in the surrounding volume requires sensing the human hand. To achieve gestural input we have built Mime [3] -- a compact, low-power 3D sensor for short-range gestural control of small display devices. Our sensor is based on a novel signal processing pipeline and is built using standard off-the-shelf components. Using Mime we demonstrated a variety of application scenarios including 3D spatial input using close-range gestures, gaming, on-the-move interaction, and operation in cluttered environments and in broad daylight conditions. In my thesis, I will continue to extend sensor capabilities to support new interaction styles.
Identifying emergent behaviours from longitudinal web use, pp. 53-56
  Aitor Apaolaza
Laboratory studies make it difficult to understand how usage evolves over time: the observations they employ are obtrusive and not naturalistic. Our system uses a remote capture tool that provides longitudinal, low-level interaction data. It is easily deployable on any Web site, allowing in-the-wild deployments, and is completely unobtrusive. Web application interfaces are designed around assumed user goals: requirement specifications contain well-defined use cases and scenarios that drive design and subsequent optimisations, while interaction patterns outside the expected ones are not considered. This results in optimisation for a stylised user rather than a real one. A bottom-up analysis of low-level interaction data allows users' actual tasks to emerge; similarities among users can be found, and solutions that are effective for real users can be designed. Factors such as learnability, and how interface changes affect users, are difficult to observe in laboratory studies; our solution makes such observation possible, adding a longitudinal point of view to traditional laboratory studies. The capture tool is deployed in real-world Web applications, capturing in-situ data from users. These data serve to explore analysis and visualisation possibilities. We present an example of the exploration results for one Web application.
Integrated visual representations for programming with real-world input and output, pp. 57-60
  Jun Kato
As computers become more pervasive, more programs deal with real-world input and output (real-world I/O), such as processing camera images and controlling robots. Real-world I/O usually involves complex data that is hard to represent with text or symbols, yet most current integrated development environments (IDEs) are equipped with text-based editors and debuggers. My thesis investigates how visual representations of the real world can be integrated into the text-based development environment to enhance the programming experience. In particular, we have designed and implemented IDEs for three scenarios, all of which make use of photos and videos representing the real world. Based on these experiences, we discuss "programming with example data," a technique in which the programmer demonstrates examples to the IDE and writes text-based code with the support of those examples.

Adjunct 4: posters

A cluster information navigate method by gaze tracking, pp. 61-62
  Dawei Cheng; Danqiong Li; Liang Fang
With the rapid growth of data volume, it is increasingly complicated to present and navigate large amounts of data conveniently on mobile devices with small screens. To address this challenge, we present a new method that displays cluster information in a hierarchical pattern and lets users interact with it through eye movements captured by the front camera of the mobile device. The key of this system is to provide users a new way to navigate and select data quickly with their eyes, without any additional equipment.
NailSense: fingertip force as a new input modality, pp. 63-64
  Sungjae Hwang; Dongchul Kim; Sang-won Leigh; Kwang-yun Wohn
In this paper, we propose a new interaction technique, called NailSense, which allows users to control a mobile device by hovering and slightly bending or extending fingers behind the device. NailSense provides basic interactions equivalent to those of touchscreens: 2D locations and binary states (i.e., touched or released) are tracked and used for input, without any need to touch the screen. The proposed technique tracks the user's fingertip in real time and triggers events based on color changes in the fingernail area. It works with conventional smartphone cameras, so no additional hardware is needed. This novel technique lets users operate mobile devices without occlusion, a crucial problem with touchscreens, and promises an extended interaction space in the air, on the desktop, or anywhere. The technique is demonstrated with example applications: a drawing app and a web browser.
Multi-perspective multi-layer interaction on mobile device, pp. 65-66
  Maryam Khademi; Mingming Fan; Hossein Mousavi Hondori; Cristina Videira Lopes
We propose a novel multi-perspective, multi-layer interaction using a mobile device, which provides an immersive experience of 3D navigation through an object. The mobile device serves as a window through which the user can observe the object in detail from various perspectives by orienting the device differently. Various layers of the object can also be shown as the user moves the device away from and toward themselves. Our approach runs in real time, is completely mobile (running on Android), and does not depend on external sensors or displays (e.g., cameras and projectors).
A touchless passive infrared gesture sensor, pp. 67-68
  Piotr Wojtczuk; David Binnie; Alistair Armitage; Tim Chamberlain; Carsten Giebeler
A sensing device for a touchless, hand gesture, user interface based on an inexpensive passive infrared pyroelectric detector array is presented. The 2 x 2 element sensor responds to changing infrared radiation generated by hand movement over the array. The sensing range is from a few millimetres to tens of centimetres. The low power consumption (< 50 µW) enables the sensor's use in mobile devices and in low energy applications. Detection rates of 77% have been demonstrated using a prototype system that differentiates the four main hand motion trajectories -- up, down, left and right. This device allows greater non-contact control capability without an increase in size, cost or power consumption over existing on/off devices.
DDMixer2.5D: drag and drop to mix 2.5D video objects, pp. 69-70
  Tatsuya Kurihara; Makoto Okabe; Rikio Onai
We propose a 2.5D video editing system called DDMixer2.5D. 2.5D video contains not only color channels but also a depth channel, which can be recorded easily using recently available depth sensors such as the Microsoft Kinect. Our system employs this depth channel to allow a user to quickly and easily edit video objects using simple drag-and-drop gestures. For example, a user can copy a video object of a dancing figure from one video to another simply by dragging and dropping with a finger on the touch screen of a mobile phone. In addition, the user can drag to adjust the 3D position in the new video so that contact between foot and floor is preserved, and the size of the body is automatically adjusted according to the depth. DDMixer2.5D offers other functions required for practical use, including object removal, 3D camera path editing, anaglyph 3D video creation, and a timeline interface.
Shape changing device for notification, pp. 71-72
  Kazuki Kobayashi; Seiji Yamada
In this paper, we describe a notification method based on peripheral cognition that exploits a human cognitive characteristic, achieving notification without interrupting users' primary tasks. We developed a shape-changing device that changes its shape to signal the arrival of information. This behavior enables a user to easily notice and accept notifications without interruption when their attention to the primary task decreases. An experiment showed a successful notification rate of 45.5%.
BitWear: a platform for small, connected, interactive devices, pp. 73-74
  Kent Lyons; David H. Nguyen; Shigeyuki Seki; Sean White; Daniel Ashbrook; Halley Profita
We describe BitWear, a platform for prototyping small, wireless, interactive devices. BitWear incorporates hardware, wireless connectivity, and a cloud component to enable collections of connected devices. We are using this platform to create, explore, and experiment with a multitude of wearable and deployable physical forms and interactions.
Hanzi Lamp: an intelligent guide interface for Chinese character learning, pp. 75-76
  Yujie Hong; Lei Shi; Fangtian Ying
In recent years, an increasing number of people want to understand Chinese culture, and Hanzi (Chinese characters) are a key to that. Learning Chinese characters as a second language can be quite challenging. What confuses learners is not only the meaning of Hanzi but also the complicated writing rules, which differ greatly from the alphabetic system English uses. Although many mobile applications and online learning systems provide Hanzi teaching interfaces, they are restricted to two-dimensional screens and thus offer little flexibility for practicing while learning. In this paper, we propose Hanzi Lamp, an intelligent guide interface that allows users to practice writing under real-time, adaptive projected guidance. Information captured by sensors is used to perceive learners' behaviors and make appropriate responses. We explore how we can enhance Chinese character learning by improving the system's understanding of the physical learning environment.
Detecting student frustration based on handwriting behavior, pp. 77-78
  Hiroki Asai; Hayato Yamana
Detecting states of frustration among students engaged in learning activities is critical to the success of teaching assistance tools. We examine the relationship between a student's pen activity and his/her state of frustration while solving handwritten problems. Based on a user study involving mathematics problems, we found that our detection method was able to detect student frustration with a precision of 87% and a recall of 90%. We also identified several particularly discriminative features, including writing stroke number, erased stroke number, pen activity time, and air stroke speed.
eyeCan: affordable and versatile gaze interaction, pp. 79-80
  Sang-won Leigh
We present eyeCan, a software system that enables rich, sophisticated, and still usable gaze interactions with low-cost gaze tracking setups. We created this practical system to drastically lower the hurdle of gaze interaction, by presenting easy-to-use gaze gestures and by reducing the cost of entry through low-precision gaze trackers. Our system effectively compensates for noise from tracking sensors and involuntary eye movements, boosting both the precision and speed of cursor control. We also explored and defined a variety of possible gaze gestures. By combining eyelid actions and gaze direction cues, our system provides a rich set of gaze events and thereby enables sophisticated applications, e.g., playing video games or navigating street view.
Augmenting braille input through multitouch feedback, pp. 81-82
  Hugo Nicolau; Kyle Montague; João Guerreiro; Diogo Marques; Tiago Guerreiro; Craig Stewart; Vicki Hanson
Current touch interfaces lack the rich tactile feedback that allows blind users to detect and correct errors. This is especially relevant for multitouch interactions, such as Braille input. We propose HoliBraille, a system that combines touch input and multi-point vibrotactile output on mobile devices. We believe this technology can offer several benefits to blind users; namely, convey feedback for complex multitouch gestures, improve input performance, and support inconspicuous interactions. In this paper, we present the design of our unique prototype, which allows users to receive multitouch localized vibrotactile feedback. Preliminary results on perceptual discrimination show an average of 100% and 82% accuracy for single-point and chord discrimination, respectively. Finally, we discuss a text-entry application with rich tactile feedback.
H-Studio: an authoring tool for adding haptic and motion effects to audiovisual content, pp. 83-84
  Fabien Danieau; Jérémie Bernon; Julien Fleureau; Philippe Guillotel; Nicolas Mollet; Marc Christie; Anatole Lécuyer
Haptic and motion effects have been widely used in virtual reality applications to provide physical feedback from the virtual world. Such feedback has recently been studied as a way to improve the user experience in audiovisual entertainment applications. However, the creation of haptic and motion effects remains a major issue and requires dedicated editing tools. This paper describes a user-friendly authoring tool to create such effects and synchronize them with audiovisual content. More precisely, we focus on the editing of motion effects. Authoring is simplified by a dedicated graphical user interface, which allows users either to import external data or to synthesize effects with a force-feedback device. Another key feature of this editor is the playback function, which lets users preview the motion effect. This new tool thus allows non-expert users to create immersive haptic-audiovisual experiences.
WebNexter: dynamic guided tours for screen readers BIBAFull-Text 85-86
  Prathik Gadde; Davide Bolchini
Recent research has shown that screen-reader users can find information on a website almost twice as fast if they bypass indexes and navigate the content pages of a collection linearly (in a guided-tour fashion). Yet manually building a guided tour for each existing index requires significant resources from web developers, especially for very large web applications. To address this problem, we introduce WebNexter, a web browser extension that automatically generates guided tours from the indexes present in the page a screen-reader user is currently visiting. WebNexter is implemented as a Google Chrome extension that supports screen-reader-accessible, dynamic construction of guided tours on a very large eCommerce website prototype. Our goal is to develop WebNexter extensions for multiple browsers that will work on any website; this will relieve developers of the burden of designing guided tours while greatly accelerating screen-reader navigation during fact-finding.
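To illustrate the guided-tour idea described above (a hypothetical sketch under our own assumptions, not WebNexter's actual code), the links extracted from an index page can be chained into a next/previous structure that a screen-reader user traverses linearly:

```python
def build_guided_tour(index_links):
    """Chain the links scraped from an index page into a guided tour.

    `index_links` is a hypothetical list of URLs in index order; the
    returned dict maps each URL to its previous and next tour stops.
    """
    tour = {}
    for i, url in enumerate(index_links):
        tour[url] = {
            "prev": index_links[i - 1] if i > 0 else None,
            "next": index_links[i + 1] if i < len(index_links) - 1 else None,
        }
    return tour
```

A real extension would additionally scrape `index_links` from the DOM of the page being visited and inject the next/previous controls into each content page.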
An assembly of soft actuators for an organic user interface BIBAFull-Text 87-88
  Yoshiharu Ooide; Hiroki Kawaguchi; Takuya Nojima
An organic user interface (OUI) is a kind of interface that is based on natural human-human and human-physical object interaction models. In such situations, hair and fur play important roles in establishing smooth and natural communication. Animals and birds use their hair, fur and feathers to express their emotions, and groom each other when forming closer relationships. Therefore, hair and fur are potential materials for development of the ideal OUI. In this research, we propose the hairlytop interface, which is a collection of hair-like units composed of shape memory alloys, for use as an OUI. The proposed interface is capable of improving its spatial resolution and can be used to develop a hair surface on any electrical device shape.
Crowdboard: an augmented whiteboard to support large-scale co-design BIBAFull-Text 89-90
  Salvatore Andolina; Daniel Lee; Steven Dow
Co-design efforts attempt to account for many diverse viewpoints. However, design teams lack support for meaningful real-time interaction with a large community of potential stakeholders. We present Crowdboard, a novel whiteboard system that enables many potential stakeholders to provide real-time input during early-stage design activities, such as concept mapping. Local design teams develop ideas on a standard whiteboard, which is augmented with annotations and comments from online participants. The system makes it possible for design teams to solicit real-time opinions and ideas from a community of people intrinsically motivated to shape the product/service.
Ta-Tap: consecutive distant tap operations for one-handed touch screen use BIBAFull-Text 91-92
  Seongkook Heo; Geehyuk Lee
Tapping on the same point twice is a common operation known as double tap, but tapping on distant points in sequence is underutilized. In this poster we explore the potential uses of consecutive distant tap operations, which we call Ta-Tap. As a single-touch operation, it is expected to be particularly useful for single-handed touch screen use. We examined three possible uses of Ta-Tap: simulating multi-touch operations, invoking a virtual scroll wheel, and invoking a pie menu. We verified the feasibility of Ta-Tap through an experiment.
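As a rough sketch of how such a gesture might be recognized (the time and distance thresholds below are illustrative assumptions, not values from the poster), a Ta-Tap is a pair of taps close in time but far apart on screen:

```python
import math

# Hypothetical thresholds: maximum interval between the two taps, and
# minimum distance for the pair to count as "distant" rather than a
# conventional double tap.
MAX_INTERVAL_MS = 400
MIN_DISTANCE_PX = 100

def is_ta_tap(tap1, tap2):
    """Each tap is (t_ms, x, y). Returns True if the pair forms a
    Ta-Tap: two taps close in time but distant on the screen."""
    t1, x1, y1 = tap1
    t2, x2, y2 = tap2
    if not (0 < t2 - t1 <= MAX_INTERVAL_MS):
        return False  # too slow (or out of order) to be one gesture
    return math.hypot(x2 - x1, y2 - y1) >= MIN_DISTANCE_PX
```

A pair that fails the distance test would fall through to ordinary double-tap handling.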
Visimu: a game for music color label collection BIBAFull-Text 93-94
  Borui Wang; Jingshu Chen
Based on previous studies of the associations between color and music, we introduce a scalable way of using colors to label songs and a visualization of music archives that facilitates music exploration. We present Visimu, an online game that attracted users to generate 926 color labels for 102 songs, with over 75% of the songs having color labels that reach high consensus in the Lab color space. We implemented a music archive visualization using the color labels generated by Visimu, and conducted an experiment showing that labeling music by color is more effective than text tags when the user is looking for songs of a particular mood or use scenario. Our results showed that Visimu is effective at producing meaningful color labels for music mood classification, and that such an approach enables a wide range of applications for music visualization and discovery.
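One plausible way to quantify "high consensus" in the Lab color space (a sketch under our own assumptions; the abstract does not state the exact criterion) is the mean pairwise Delta E 1976 distance between a song's labels:

```python
import itertools
import math

def lab_consensus(labels, threshold=20.0):
    """labels: list of (L, a, b) color labels collected for one song.
    The song's labels are treated as reaching 'high consensus' when the
    mean pairwise Euclidean (Delta E 1976) distance falls below the
    threshold. The threshold value is illustrative, not from the paper."""
    if len(labels) < 2:
        return True  # a single label is trivially consistent
    dists = [math.dist(p, q) for p, q in itertools.combinations(labels, 2)]
    return sum(dists) / len(dists) < threshold
```

Lab distances are used here because, unlike RGB distances, they roughly track perceived color difference.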
User created tangible controls using ForceForm: a dynamically deformable interactive surface BIBAFull-Text 95-96
  Jessica Tsimeris; Duncan Stevenson; Matt Adcock; Tom Gedeon; Michael Broughton
Touch surfaces are common devices but they are often uniformly flat and provide little flexibility beyond changing the visual information communicated to the user via software. Furthermore, controls for interaction are not tangible and are usually specified and placed by the user interface designer. Using ForceForm, a dynamically deformable interactive surface, the user is able to directly sculpt the surface to create tangible controls with force feedback properties. These controls can be made according to the user's specifications, and can then be relinquished when no longer needed. We describe this method of interaction, provide an implementation of a slider, and ideas for further controls.
Asymmetric cores for low power user interface systems BIBAFull-Text 97-98
  Jaeyeon Kihm; François Guimbretière
In recent years, advances in hardware design have led to significant improvements in the battery life of everyday information appliances. In particular, application processors increasingly include low power "helper" cores dedicated to simpler tasks. Using a custom board design, Guimbretière et al. [2] demonstrated that such helper cores can also be used to execute simple user interface tasks. We revisit their approach by implementing a similar system on an off-the-shelf application processor (TI OMAP4), and demonstrate that, in many cases, the gains reported by Guimbretière et al. [2] can be achieved by simply having the helper core dispatch input events. This new approach can be implemented by merely changing the toolkit infrastructure, thus greatly simplifying deployment.
Visualizing web browsing history with barcode chart BIBAFull-Text 99-100
  Borui Wang; Ningxia Zhang; Jianfeng Hu; Zheng Shen
Inspired by DNA art, we introduce a data visualization technique called the barcode chart, which uses color-illuminated stripes that resemble barcodes to visualize temporal data. The barcode chart excels at revealing high-level patterns in highly segmented temporal data, while retaining access to details through interaction. We demonstrate Yogurt, a browser extension that uses the barcode chart to visualize online browsing history. We conducted a user study and analyzed the effectiveness of the barcode chart in Yogurt compared with other applications. We conclude that the barcode chart suits the high-density, highly fragmented temporal data in Yogurt, and helps reveal online distraction and other web browsing patterns.
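As an illustrative reading of the technique (not Yogurt's actual renderer), a barcode chart can be produced by mapping each temporal segment to a colored stripe whose width is proportional to its duration:

```python
def barcode_svg(segments, height=40, px_per_unit=1):
    """segments: list of (duration, css_color) pairs in temporal order.
    Emits a minimal SVG in which each segment becomes one colored stripe
    whose width is proportional to its duration. The parameters and
    output format are our own assumptions."""
    rects, x = [], 0
    for duration, color in segments:
        w = duration * px_per_unit
        rects.append(f'<rect x="{x}" y="0" width="{w}" '
                     f'height="{height}" fill="{color}"/>')
        x += w
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{x}" height="{height}">' + "".join(rects) + "</svg>")
```

For browsing history, each segment might be one page visit colored by site category, so bursts of fragmented browsing show up as dense runs of thin stripes.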
Wheels in motion: inertia sensing in roller derby BIBAFull-Text 101-102
  Craig Stewart; Penny Traitor; Vicki L. Hanson
The recent resurgence of Roller Derby has seen the game progress to an elite level, with leagues becoming increasingly competitive and taking a more structured and athletic approach to training. Leagues that the authors are involved in have expressed a desire for an objective measure of basic skills and a way to monitor improvements in performance, especially amongst junior skaters. This paper details the construction of an inertia-sensing platform designed to be safe for skaters to wear. We have identified a skating manoeuvre, the "crossover", that can be automatically detected using a simple filtering and thresholding procedure. We also report initial results on automatically detecting when a crossover occurs and provide details of our future work.
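The filtering-and-thresholding procedure can be sketched as follows; the window size and threshold are made-up parameters, not the authors' values:

```python
def detect_crossovers(signal, window=5, threshold=1.5):
    """`signal` is a 1-D inertial reading (e.g. one gyroscope axis).
    Low-pass filter it with a simple moving average, then report the
    sample indices where the smoothed signal rises through the
    threshold, each treated as one crossover event."""
    smoothed = [
        sum(signal[max(0, i - window + 1): i + 1]) /
        len(signal[max(0, i - window + 1): i + 1])
        for i in range(len(signal))
    ]
    events = []
    for i in range(1, len(smoothed)):
        # A rising threshold crossing marks the onset of one event.
        if smoothed[i - 1] < threshold <= smoothed[i]:
            events.append(i)
    return events
```

Smoothing before thresholding suppresses the short vibration spikes that skating over rough floor would otherwise register as false events.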
LivingClay: particle actuation to control display volume and stiffness BIBAFull-Text 103-104
  Jefferson Pardomuan; Toshiki Sato; Hideki Koike
We present a new type of display actuation that can control both the geometry and the stiffness of a display using a filler material and an air flow control technique. The display consists of a flat, flexible layer of cells on the surface and a chamber filled with particles beneath it. Display geometry can be changed by transporting particles between the display cells and the particle chamber, using pressurized air and vacuum to control the air flows. The system also allows variable stiffness, using a vacuum technique to harden the particles inside the chamber. In this paper, we present the design and control technique of this new type of actuator as well as possible interactions on a single-actuator display. We also propose a low-cost, effective way to control an array of actuators in which the air flow lines and particle lines are arranged in a multiplexed grid configuration.
Brainstorm, define, prototype: timing constraints to balance appropriate and novel design BIBAFull-Text 105-106
  Andrew Nicholas Elder; Elaine Zhou
We present the results of a human creativity experiment that examined the effect of varying the timing of narrowed constraints. Participants were asked to create a static web ad for Stanford University guided under a timed design process and were introduced to a narrowed constraint either at the beginning, middle, or end of the prototyping process. The narrow constraint addressed goal and task constraints by specifying the target audience and ad size. We find that groups introduced to narrow constraints prior to the brainstorm yielded more appropriate results, while those introduced prior to the final production yielded more novel results. Our results suggest that effective timing of design constraints may further optimize ideation and design methodologies.
FingerSkate: making multi-touch operations less constrained and more continuous BIBAFull-Text 107-108
  Jeongmin Son; Geehyuk Lee
Multi-touch operations are sometimes difficult to perform due to musculoskeletal constraints. We propose FingerSkate, a variation of current multi-touch operations that makes them less constrained and more continuous. With FingerSkate, once one starts a multi-touch operation, one can continue it without having to keep both fingers on the screen. In a pilot study, we observed that participants learned to FingerSkate easily and actively utilized the new technique.
Obake: interactions on a 2.5D elastic display BIBAFull-Text 109-110
  Dhairya Dand; Robert Hemsley
In this poster we present an interaction language for the manipulation of an elastic deformable 2.5D display. We discuss a range of gestures to interact and directly deform the surface. To demonstrate these affordances and the associated interactions, we present a scenario of a topographic data viewer using this prototype system.
BackTap: robust four-point tapping on the back of an off-the-shelf smartphone BIBAFull-Text 111-112
  Cheng Zhang; Aman Parnami; Caleb Southern; Edison Thomaz; Gabriel Reyes; Rosa Arriaga; Gregory D. Abowd
We present BackTap, an interaction technique that extends the input modality of a smartphone to add four distinct tap locations on the back case of a smartphone. The BackTap interaction can be used eyes-free with the phone in a user's pocket, purse, or armband while walking, or while holding the phone with two hands so as not to occlude the screen with the fingers. We employ three common built-in sensors on the smartphone (microphone, gyroscope, and accelerometer) and feature a lightweight heuristic implementation. In an evaluation with eleven participants and three usage conditions, users were able to tap four distinct points with 92% to 96% accuracy.
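A toy version of such a heuristic (our own illustration, not the authors' classifier) could use the signs of the rotation spikes measured by the gyroscope at the moment the microphone detects a tap:

```python
def classify_back_tap(gyro_x, gyro_y):
    """Toy classifier assuming that a tap on the back tilts the phone
    slightly toward the tapped corner, so the signs of the rotation
    spikes around the x and y axes pick one of four locations. Axis
    conventions and sign mapping here are assumptions."""
    vertical = "top" if gyro_x < 0 else "bottom"
    horizontal = "left" if gyro_y < 0 else "right"
    return f"{vertical}-{horizontal}"
```

A robust implementation would, as the abstract notes, first gate on the microphone's acoustic tap signature and also fuse the accelerometer reading before classifying.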
Haptic props: semi-actuated tangible props for haptic interaction on the surface BIBAFull-Text 113-114
  Dimitar Valkov; Andreas Mantler; Klaus Hinrichs
While multiple methods to extend the expressiveness of tangible interaction have been proposed, e.g., self-motion, stacking and transparency, providing haptic feedback through the tangible prop itself has rarely been considered. In this poster we present a semi-actuated, nano-powered, tangible prop, which is able to provide programmable friction for interaction with a tabletop setup. We have conducted a preliminary user study evaluating users' acceptance of the device and their ability to detect changes in the programmed level of friction, and obtained promising results.