
Adjunct Proceedings of the 2014 ACM Symposium on User Interface Software and Technology

Fullname: UIST'14: Adjunct Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology
Editors: Hrvoje Benko; Mira Dontcheva; Daniel Wigdor
Location: Honolulu, Hawaii
Dates: 2014-Oct-05 to 2014-Oct-08
Standard No: ISBN: 978-1-4503-3068-8; ACM DL: Table of Contents; hcibib: UIST14-2
Links: Conference Website
  1. UIST 2014-10-05 Volume 2
    1. Doctoral symposium
    2. Demonstrations
    3. Posters

UIST 2014-10-05 Volume 2

Doctoral symposium

Scalable methods to collect and visualize sidewalk accessibility data for people with mobility impairments BIBAFull-Text 1-4
  Kotaro Hara
Poorly maintained sidewalks pose considerable accessibility challenges for mobility-impaired persons; however, there are currently few, if any, mechanisms to determine accessible areas of a city a priori. In this paper, I introduce four threads of research that I will conduct for my Ph.D. thesis, aimed at creating new methods and tools to provide unprecedented levels of information on the accessibility of streets and sidewalks. Namely, I will (i) conduct a formative study to better understand accessibility problems, (ii) develop and evaluate scalable map-based data collection methods, (iii) integrate computer vision algorithms to increase the scalability of these methods, and (iv) develop accessibility-aware map-based tools that demonstrate the utility of our data (Figures 1 and 6).
Depth based interaction and field of view manipulation for augmented reality BIBAFull-Text 5-8
  Jason Orlosky
In recent years, the market for portable devices has seen a large increase in the development of head-mounted displays. While these displays provide many benefits to users, safety is still a concern. In particular, ensuring that content does not interfere with everyday activities and that users have adequate peripheral vision is very important for situational awareness. In this paper, I address these issues through the use of two novel display prototypes. The first is an optical see-through multi-focal-plane display combined with an eye tracking interface. Through eye tracking and knowledge of the focal plane distances, I can calculate whether a user is looking at the environment or at a focal plane in the display. Any distracting text can then be quickly removed so that he or she has a clear view of the environment. The second prototype is a video see-through display which expands a user's environmental view through the use of 238° ultra-wide-field-of-view fisheye lenses. Based on the results of several initial evaluations, these new interfaces have the potential to help users improve environmental awareness.
Leveraging physical human actions in large interaction spaces BIBAFull-Text 9-12
  Can Liu
Large interaction spaces such as wall-size displays allow users to interact not only with their hands, as in a traditional desktop environment, but also with their whole body, e.g., by walking or turning their head. While this is particularly suitable for tasks where users need to navigate large amounts of data and manipulate them at the same time, we still lack a deep understanding of the advantages of large displays for such tasks. My dissertation begins with a set of studies to understand the benefits and drawbacks of a high-resolution wall-size display vs. a desktop environment. The results show strong benefits of the former due to the flexibility of "physical navigation" involving the whole body when compared with mouse input. Moving from whole-body interaction to human-to-human interaction, my current work seeks to bring natural human actions to collaborative contexts and to design interaction techniques that detect gestural interactions between users to support collaborative data exchange.
Using brain-computer interfaces for implicit input BIBAFull-Text 13-16
  Daniel Afergan
Passive brain-computer interfaces, in which implicit input is derived from a user's changing brain activity without conscious effort from the user, may be one of the most promising applications of brain-computer interfaces because they can improve user performance without additional effort on the user's part. I seek to use physiological signals that correlate to particular brain states in order to adapt an interface while the user behaves normally. My research aims to develop strategies to adapt the interface to the user and the user's cognitive state using functional near-infrared spectroscopy (fNIRS), a non-invasive, lightweight brain-sensing technique. While passive brain-computer interfaces are currently being developed and researchers have shown their utility, there has been little effort to develop a framework or hierarchy for adaptation strategies.
Interacting with massive numbers of student solutions BIBAFull-Text 17-20
  Elena L. Glassman
When teaching programming or hardware design, it is pedagogically valuable for students to generate examples of functions, circuits, or system designs. Teachers can be overwhelmed by these types of student submissions when running large residential courses or recently released massive online courses. The underlying distribution of student solutions submitted in response to a particular assignment may be complex, but the newly available volume of student solutions represents a denser sampling of that distribution. Working with large datasets of students' solutions, I am building systems with user interfaces that allow teachers to explore the variety of their students' correct and incorrect solutions. Forum posts, grading rubrics, and automatic graders can be based on student solution data, helping turn massive engineering and computer science classrooms into a source of useful insight and feedback for teachers. In the development process, I hope to identify essential design principles for such systems.
Powering interactive intelligent systems with the crowd BIBAFull-Text 21-24
  Walter S. Lasecki
Creating intelligent systems that are able to recognize a user's behavior, understand unrestricted spoken natural language, complete complex tasks, and respond fluently could change the way computers are used in daily life. But fully-automated intelligent systems are a far-off goal -- currently, machines struggle in many real-world settings because problems can be almost entirely unconstrained and can vary greatly between instances. Human computation has been shown to be effective in many of these settings, but is traditionally applied in an offline, batch-processing fashion. My work focuses on a new model of continuous, real-time crowdsourcing that enables interactive crowd-powered systems.
Making distance matter: leveraging scale and diversity in massive online classes BIBAFull-Text 25-28
  Chinmay Kulkarni
The large scale of online classes and the diversity of the students that participate in them can enable new educational systems. This massive scale and diversity can enable always-available systems that help students share diverse ideas, and inspire and learn from each other. We introduce systems for two core educational processes at scale: discussion and assessment. To date, several thousand students in a dozen online classes have used our discussion system. Controlled experiments suggest that participants in more diverse discussions perform better on tests and that discussion improves engagement. Similarly, more than 100,000 students have reviewed peer work for both summative assessment and feedback. Through these systems, we argue that to create new educational experiences at scale, pedagogical strategies and software that leverage scale and diversity must be co-developed. More broadly, we suggest the key to creating new educational experiences online lies in leveraging massive networks of peers.
Matter matters: offloading machine computation to material computation for shape changing interfaces BIBAFull-Text 29-32
  Lining Yao
This paper introduces material computation as a way to offload computing from machine to material in the process of creating shape-changing output. It explains the mechanism of transformation, introduces the concept of material computation, summarizes and analyzes related literature within and beyond the HCI field, presents an interaction loop integrating material computation, and describes my own practice in material computation techniques and applications.

Demonstrations

A three-step interaction pattern for improving discoverability in finger identification techniques BIBAFull-Text 33-34
  Alix Goguey; Géry Casiez; Daniel Vogel; Fanny Chevalier; Thomas Pietrzak; Nicolas Roussel
Identifying which fingers are in contact with a multi-touch surface provides a very large input space that can be leveraged for command selection. However, the numerous possibilities enabled by such a vast space come at the cost of discoverability. To alleviate this problem, we introduce a three-step interaction pattern inspired by hotkeys that also supports feed-forward. We illustrate this interaction with three applications that allow us to explore and adapt it in different contexts.
A rapid prototyping toolkit for touch sensitive objects using active acoustic sensing BIBAFull-Text 35-36
  Makoto Ono; Buntarou Shizuki; Jiro Tanaka
We present a prototyping toolkit for creating touch-sensitive prototypes from everyday objects without needing special skills such as writing code or designing circuits. The toolkit consists of an acoustic touch sensor module that captures the resonant properties of objects; software modules, including one that recognizes how an object is touched by using machine learning; and plugins for visual programming environments such as Scratch and Max/MSP. As a result, our toolkit enables users to easily configure responses to touches using a wide variety of visual or audio outputs. We believe that our toolkit expands the creativity of non-specialists, such as children and media artists.
Video text retouch: retouching text in videos with direct manipulation BIBAFull-Text 37-38
  Laurent Denoue; Scott Carter; Matthew Cooper
Video Text Retouch is a technique for retouching textual content found in many online videos, such as screencasts, recorded presentations, and e-learning videos. Viewed through our special HTML5-based player, users can edit the textual content of video frames in real time, for example correcting typos or inserting new words between existing characters. Edits are overlaid and tracked at the desired position for as long as the original video content remains similar. We describe the interaction techniques and image processing algorithms, and give implementation details of the system.
Inkantatory paper: dynamically color-changing prints with multiple functional inks BIBAFull-Text 39-40
  Takahiro Tsujii; Naoya Koizumi; Takeshi Naemura
We propose an effective combination of multiple functional inks, including conductive silver ink, thermo-chromic ink, and regular inkjet ink, for a novel paper-based interface called Inkantatory Paper that can dynamically change the color of its printed pattern. Constructed with off-the-shelf inkjet printing using silver conductive ink, our system enables users to fabricate thin, flat, flexible, and low-cost interactive paper. We evaluated the characteristics of the conductive silver ink as a heating system for the thermo-chromic ink and created applications demonstrating the usability of the system.
StackBlock: block-shaped interface for flexible stacking BIBAFull-Text 41-42
  Masahiro Ando; Yuichi Itoh; Toshiki Hosoi; Kazuki Takashima; Kosuke Nakajima; Yoshifumi Kitamura
We propose a novel building-block interface called StackBlock that allows users to precisely construct 3D shapes by stacking blocks at arbitrary positions and angles. Infrared LEDs and phototransistors are laid out in a matrix on each surface of a block to detect the areas contacted by other blocks. Contact-area information is relayed down to the bottom block by infrared communication between the stacked blocks, and the bottom block then sends all the information to the host computer, which recognizes the 3D shape. We implemented a prototype of StackBlock with several blocks and evaluated the accuracy and latency of 3D shape recognition. The results show that StackBlock recognizes 3D shapes with sufficient accuracy for users' flexible stacking.
A pen-based device for sketching with multi-directional traction forces BIBAFull-Text 43-44
  Junichi Yamaoka; Yasuaki Kakehi
This paper presents a pen-grip-shaped device that assists in sketching using multi-directional traction forces. By using the asymmetric acceleration of vibration actuators that drive in a linear direction, the system can create a virtual traction force in the desired direction. We augment users' drawing skills with a device that arranges four vibration actuators to provide a traction force and a rotary sensation. Because the device is portable and not limited to a particular location, it can be used to guide the stroke direction and assist a user sketching on a large piece of paper. Moreover, users can attach it to any writing utensil, such as brushes or crayons. In this paper, we describe the design of the device, evaluation experiments, and applications.
Tangible and modular input device for character articulation BIBAFull-Text 45-46
  Alec Jacobson; Daniele Panozzo; Oliver Glauser; Cedric Pradalier; Otmar Hilliges; Olga Sorkine-Hornung
We present a modular, novel mechanical device for animation authoring. The pose of the device is sensed at interactive rates, enabling quick posing of characters rigged with a skeleton of arbitrary topology. The mapping between the physical device and virtual skeleton is computed semi-automatically guided by sparse user correspondences. Our demonstration allows visitors to experiment with our device and software, choosing from a variety of characters to control.
Digital flavor interface BIBAFull-Text 47-48
  Nimesha Ranasinghe; Gajan Suthokumar; Kuan Yi Lee; Ellen Yi-Luen Do
This demo presents a unique technology to enable digital simulation of flavors. The Digital Flavor Interface, a digital control system, is developed to stimulate the senses of taste (using electrical and thermal stimulation of the human tongue) and smell (using a controlled scent-emitting mechanism) simultaneously, thus simulating different virtual flavors. A preliminary user experiment was conducted to investigate the effectiveness of this approach with five distinct flavor stimuli. The experimental results suggest that users were able to effectively identify different flavors such as minty, spicy, and lemon. In summary, our work demonstrates a novel controllable digital flavor instrument, which may be utilized in interactive computer systems for rendering virtual flavors.
Interactive exploration and selection in volumetric datasets with color tunneling BIBAFull-Text 49-50
  Christophe Hurter; A. Russel Taylor; Sheelagh Carpendale; Alexandru Telea
Interactive data exploration and manipulation are often hindered by dataset sizes. For 3D data, this is aggravated by occlusion, important adjacencies, and entangled patterns. Such challenges make visual interaction via common filtering techniques hard. We describe a set of real-time multi-dimensional data deformation techniques that aim to help users to easily select, analyze, and eliminate spatial and data patterns. Our techniques allow animation between view configurations, semantic filtering and view deformation. Any data subset can be selected at any step along the animation. Data can be filtered and deformed to reduce occlusion and ease complex data selections. Our techniques are simple to learn and implement, flexible, and real-time interactive with datasets of tens of millions of data points. We demonstrate our techniques on three domain areas: 2D image segmentation and manipulation, 3D medical volume exploration, and astrophysical exploration.
FeelCraft: crafting tactile experiences for media using a feel effect library BIBAFull-Text 51-52
  Siyan Zhao; Oliver Schneider; Roberta Klatzky; Jill Lehman; Ali Israr
FeelCraft is a media plugin that monitors events and states in the media and associates them with expressive tactile content using a library of feel effects (FEs). A feel effect (FE) is a user-defined haptic pattern that, by virtue of its connection to a meaningful event, generates dynamic and expressive effects on the user's body. We compiled a library of more than fifty FEs associated with common events in games, movies, storybooks, etc., and used them in a sandbox-type gaming platform. The FeelCraft plugin allows a game designer to quickly generate haptic effects, associate them to events in the game, play them back for testing, save them and/or broadcast them to other users to feel the same haptic experience. Our demonstration shows an interactive procedure for authoring haptic media content using the FE library, playing it back during interactions in the game, and broadcasting it to a group of guests.
SikuliBot: automating physical interface using images BIBAFull-Text 53-54
  Jeeeun Kim; Mike Kasper; Tom Yeh; Nikolaus Correll
We present SikuliBot, an image-based approach to automating user interfaces. SikuliBot extends the visual programming concept of Sikuli Script [2] from graphical UIs to real-world physical UIs, such as mobile devices' touchscreens and hardware buttons. The key to our approach is using a physical robot to see an interface, identify a target, and perform an action on the target using the robot's actuators. We demonstrate working examples on a MakerBot 3D printer that moves a stylus to perform multi-touch gestures on a touchscreen, automating tasks such as swipe-to-unlock, playing a virtual piano, and playing the Angry Birds game. A wide range of automation possibilities are made viable using a simple scripting language based on images of UI components. The benefits of our approach are generalizability, freedom from instrumentation, and a high-level programming abstraction.
THAW: tangible interaction with see-through augmentation for smartphones on computer screens BIBAFull-Text 55-56
  Sang-won Leigh; Philipp Schoessler; Felix Heibeck; Pattie Maes; Hiroshi Ishii
In this paper, we present a novel interaction system that allows a collocated large display and small handheld devices to seamlessly work together. The smartphone acts both as a physical interface and as an additional graphics layer for near-surface interaction on a computer screen. Our system enables accurate position tracking of a smartphone placed on or over any screen by displaying a 2D color pattern that is captured using the smartphone's back-facing camera. The proposed technique can be implemented on existing devices without the need for additional hardware.
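The abstract above describes tracking a phone's position by displaying a 2D color pattern and sampling it with the back-facing camera. As a hypothetical illustration (the actual pattern used by THAW is not specified here), one could encode screen coordinates directly in color channels and invert that mapping from a camera sample:

```python
# Hypothetical sketch of position-from-color tracking in the spirit of THAW.
# Assumption (not from the paper): the screen shows a gradient in which the
# red channel encodes x and the green channel encodes y, scaled to 0..255.

def encode_pattern_pixel(x, y, width, height):
    """Color for screen pixel (x, y) so that its color reveals its position."""
    r = round(255 * x / (width - 1))
    g = round(255 * y / (height - 1))
    return (r, g, 0)

def decode_position(rgb, width, height):
    """Invert the encoding: recover approximate coordinates from a camera sample."""
    r, g, _ = rgb
    x = round(r / 255 * (width - 1))
    y = round(g / 255 * (height - 1))
    return (x, y)
```

With only 256 levels per channel, the recovered position is quantized to a few pixels of error on a 1080p screen; a real system would likely use a finer spatial code, but the sketch shows why a single camera sample suffices for absolute positioning.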
Projectron mapping: the exercise and extension of augmented workspaces for learning electronic modeling through projection mapping BIBAFull-Text 57-58
  Yoh Akiyama; Homei Miyashita
There has been research using software simulations to support beginners learning electronic modeling, as well as systems that extend workspaces and support electronic modeling on tabletop interfaces. However, because software-based circuit simulation does not let users operate the actual components, the feeling of physically moving the elements is lacking. For this reason, we propose a system that extends the sense of reality in software simulators through the use of projection mapping. This makes it possible to give the impression of actually moving the elements while using a software simulator, and to achieve both high speed and a sense of reality through trial and error.
Extension sticker: a method for transferring external touch input using a striped pattern sticker BIBAFull-Text 59-60
  Kunihiro Kato; Homei Miyashita
A method for transferring external touch input is proposed by partially attaching a sticker to a touch-panel display. The touch input area can be extended by printing striped patterns using a conductive ink and attaching them to overlap with a portion of a touch-panel display. Even if the user does not touch the touch panel directly, a touch event can be generated by touching the stripes at an arbitrary point corresponding to the touched area. Thus, continuous touch input can be generated, such as a scrolling operation without interruption. This method can be applied to a variety of devices including PCs, smartphones, and wearable devices. In this paper, we present several different examples of applications, including a method for extending control areas outside of the touch panel, such as the side or back of a smartphone.
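On the software side, the panel only ever reports touches where the sticker's stripes overlap the display, so an application must map those reported coordinates back to the extended input area. A minimal sketch of such a mapping, with the stripe geometry as an assumption rather than the paper's actual layout:

```python
# Hypothetical sketch of remapping touches relayed by a striped conductive
# sticker. Assumption (not from the paper): each stripe terminates on the
# panel near the right edge at a known y-coordinate, so a touch reported
# there identifies which stripe, and thus which off-screen control, was hit.

STRIPE_PITCH_PX = 20   # assumed spacing between stripe ends on the panel
PANEL_EDGE_X = 1080    # assumed x-coordinate where stripes leave the display

def remap_touch(panel_x, panel_y):
    """Reinterpret a touch reported at the panel edge as a touch on the
    sticker's extended area; pass ordinary touches through unchanged."""
    if panel_x < PANEL_EDGE_X - STRIPE_PITCH_PX:
        return ("on_screen", panel_x, panel_y)
    stripe = panel_y // STRIPE_PITCH_PX   # which stripe relayed the touch
    return ("extended", stripe)
```

Touching a stripe anywhere along its printed length energizes the same panel location, which is why a scrolling gesture along the stripes can continue past the screen bezel without interruption.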
LiveSphere: immersive experience sharing with 360 degrees head-mounted cameras BIBAFull-Text 61-62
  Shunichi Kasahara; Shohei Nagai; Jun Rekimoto
Sharing a fully immersive experience in real time has been one of the ultimate goals of telecommunication. Possible applications include entertainment, sports viewing, education, social networking, and professional assistance. Recent head-worn wearable cameras can shoot first-person video; however, the angle of view is limited to the wearer's head direction, and the captured video is shaky, which can make viewers dizzy. We propose LiveSphere, an immersive experience-sharing system built around a wearable camera headgear that provides 360-degree spherical images of the user's surrounding environment. The LiveSphere system stabilizes the spherical video and transmits it to other users, so that they can view the shared video comfortably and look around the scene from a view angle independent of the first person's. In this note, we give an overview of the LiveSphere system implementation, stabilization, and viewing experience.
Nishanchi: CAD for hand-fabrication BIBAFull-Text 63-64
  Pragun Goyal; Joseph Paradiso; Pattie Maes
We present Nishanchi, a position- and orientation-aware handheld inkjet printer that can be used to transfer reference marks from CAD to the workpiece for use in manual fabrication workflows. Nishanchi also has a digitizing tip that can be used to capture features of the workpiece into a computer model. By allowing this two-way exchange of information between CAD and a non-conformal workpiece, we believe that Nishanchi might help make the inclusion of CAD in manual fabrication workflows more seamless.
Enhancing virtual immersion through tactile feedback BIBAFull-Text 65-66
  Mounia Ziat; Taylor Rolison; Andrew Shirtz; Daniel Wilbern; Carrie Anne Balcer
The lack of tangibility while interacting with virtual objects can be compensated by adding haptic and/or tactile devices or actuators to enhance the user experience. In this demonstration, we present two scenarios that consist of perceiving moving objects on the human body (insects) and feeling physical sensations of virtual thermal objects.
Ubisonus: spatial freeform interactive speakers BIBAFull-Text 67-68
  Yoshio Ishiguro; Eric Brockmeyer; Alex Rothera; Ali Israr
We present freeform interactive speakers for creating spatial sound experiences from a variety of surfaces. Traditional surround sound systems are widely used and consist of multiple electromagnetic speakers that create point sound sources within a space. Our proposed system creates directional sound and can be easily embedded into architecture, furniture, and many everyday objects. We use electrostatic loudspeaker technology made from thin, flexible, lightweight, and low-cost materials that can take on different sizes and shapes. In this demonstration we show various configurations, such as a single speaker, a speaker array, and tangible speakers, for playful and exciting interactions with spatial sounds. This is an example of the new possibilities for the design of various interactive surfaces.

Posters

Eyes-free text entry interface based on contact area for people with visual impairment BIBAFull-Text 69-70
  Taedong Goh; Sang Woo Kim
We developed an eyes-free text entry interface for mobile devices with touchscreens that uses contact area to determine the pressed state. The interface gives audio feedback for a touched character, similar to the iPhone's VoiceOver, but audio feedback for two simultaneous touches is also supported. A desired character is entered by pressing once. Independent entry with two fingers can reduce the movement distance when searching for a character. Because the whole interaction occurs in touched states, additional tactile feedback can be added.
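The touched-versus-pressed distinction from contact area can be sketched as a simple ratio test; the threshold below is an illustrative assumption, not a value from the paper:

```python
# Hypothetical sketch of detecting a "press" from touchscreen contact area,
# in the spirit of the eyes-free interface above. When a fingertip is pressed
# harder it flattens, so its reported contact area grows relative to the
# light resting contact used while searching for a character by audio.

PRESS_RATIO = 1.4  # assumed: pressed when area grows 40% beyond resting contact

def classify_touch(rest_area, current_area):
    """Return 'pressed' when the fingertip flattens against the screen,
    'touched' while the user is only searching for a character."""
    return "pressed" if current_area >= PRESS_RATIO * rest_area else "touched"
```

Calibrating against each finger's own resting area (rather than a fixed absolute threshold) keeps the test usable across users with different fingertip sizes.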
Third surface: an augmented world wide web for the physical world BIBAFull-Text 71-72
  Valentin Heun; Kenneth Friedman; Andrew Mendez; Benjamin Reynolds; Kevin Wong; Pattie Maes
The ubiquitous use of Augmented Reality (AR) applications depends on an easy way of authoring and using content. Present systems depend on specific authoring tools or content delivery systems that provide a limited amount of freedom and content ownership to the author compared to the possibilities of the World Wide Web (WWW). Third Surface is a system that allows the user to publish and use WWW content saved on personal HTTP servers for augmented reality applications in the physical environment. The contribution of this work is a system that allows a web developer to post location-based augmented reality content and AR markers on one's own HTTP server. A Global Location Service (GLS) provides a browsing application with location-based URLs that link the browsing application to content, AR markers, and data for correctly positioning content in the augmented reality interface. Third Surface has three advantages compared to other concepts: it is globally scalable to millions of users, the interactive possibilities for developers and users are the same as for the WWW, and developers are in charge of their own content distribution.
Tactile cue presentation for vocabulary learning with keyboard BIBAFull-Text 73-74
  Daichi Ogawa; Sakiko Ikeno; Ryuta Okazaki; Taku Hachisu; Hiroyuki Kajimoto
This paper presents the results of a pilot experiment observing the effect of tactile cues on vocabulary learning. Considering that we generally memorize words by associating them with various cues, we designed a tactile cue presentation device that aids vocabulary learning by applying vibrations to the finger that is associated with the next key to press when typing on a keyboard. Experiments comparing tactile and visual cues indicated that tactile cues can significantly improve long-term retention of vocabulary after one week.
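Delivering the cue requires knowing which finger is associated with each key. A minimal sketch assuming standard touch-typing finger assignments (the abstract does not specify the mapping the device uses):

```python
# Hypothetical sketch of choosing which finger to vibrate for the next key.
# Assumption: conventional touch-typing assignment of letter keys to fingers.

FINGER_OF_KEYS = {
    "qaz": "left pinky", "wsx": "left ring", "edc": "left middle",
    "rfvtgb": "left index", "yhnujm": "right index",
    "ik": "right middle", "ol": "right ring", "p": "right pinky",
}

def finger_for_key(key):
    """Return the finger that should press `key`, or None for unmapped keys."""
    for keys, finger in FINGER_OF_KEYS.items():
        if key.lower() in keys:
            return finger
    return None
```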
Trainer: a motion-based interactive game for balance rehabilitation training BIBAFull-Text 75-76
  Guanyun Wang; Ye Tao; Dian Yu; Chuan Cao; Hongyu Chen; Cheng Yao
In physiotherapy, the traditional approach of using fixed aids to train patients to keep their balance is often ineffective, because people tend to lose interest in the training or to lose confidence in their ability to finish it. We propose the Trainer system, built on traditional physiotherapy treatment methods, to allow patients to play qualified and immersive games with a mobile aid. Using RF localization and self-balancing technology, the system allows patients to control a vehicle with their sense of balance. The platform provides a series of game feedback interfaces that involve part-body motion in seated manipulation therapy, making rehabilitation more flexible and more effective. This paper reports the design and control of the Trainer, experimental evaluations of system performance, and an exploration of future work in detail. Our work is intended to improve the patient experience of physiotherapy rehabilitation by using games with intuitive ways of controlling mobile instruments.
SenseGlass: using google glass to sense daily emotions BIBAFull-Text 77-78
  Javier Hernandez; Rosalind W. Picard
For over a century, scientists have studied human emotions in laboratory settings. However, these emotions have been largely contrived -- elicited by movies or fake "lab" stimuli, which tend not to matter to the participants in the studies, at least not compared with events in their real life. This work explores the utility of Google Glass, a head-mounted wearable device, to enable fundamental advances in the creation of affect-based user interfaces in natural settings.
A text entry technique for wrist-worn watches with tiny touchscreens BIBAFull-Text 79-80
  Hyeonjoong Cho; Miso Kim; Kyeongeun Seo
We consider a text entry technique for wrist-worn watches with inch-scale touchscreens. Most commercially available watches, for example the Galaxy Gear and Omate, have roughly 1.5-inch touchscreens that are too small for a shrunken Qwerty keyboard. Moreover, virtual-button-based techniques determine input letters by distinguishing touched locations on the touchscreen, which continuously demands that a user carefully touch specific locations; thus, they are not suitable for tiny-touchscreen devices in mobile environments. Instead, the proposed text entry technique allows a user to touch almost anywhere on the touchscreen, determining input letters from the drag direction regardless of the touched location. We implemented the proposed method on a commercial watch with a 1.54-inch touchscreen to validate its feasibility.
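Determining input from drag direction alone can be sketched as an angle-to-sector classification; the 8-way layout below is an illustrative assumption, not the paper's actual letter mapping:

```python
# Hypothetical sketch of location-independent, direction-based input: only the
# drag vector's angle matters, so the user may start anywhere on the tiny
# screen. Assumption: an 8-way sector layout (the paper's mapping may differ).
import math

SECTORS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]

def drag_direction(x0, y0, x1, y1):
    """Classify a drag from (x0, y0) to (x1, y1) into one of 8 directions,
    independent of where on the screen the drag started."""
    angle = math.atan2(y0 - y1, x1 - x0)      # screen y grows downward
    sector = round(angle / (math.pi / 4)) % 8
    return SECTORS[sector]
```

Because the classification ignores the start point, it sidesteps the precise-targeting problem that makes shrunken virtual buttons unusable on a 1.5-inch screen.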
Understanding the design of a flying jogging companion BIBAFull-Text 81-82
  Florian Mueller; Matthew Muirhead
Jogging can offer many health benefits, and mobile phone apps have recently emerged that aim to support the jogging experience. We believe that jogging is an embodied experience, and therefore present a contrasting approach to these existing systems by arguing that any supporting technology should also take on an embodied approach. In order to exemplify this approach, we detail the technical specifications of a flying quadcopter that has successfully been used with joggers in order to explore the design of embodied systems to support physical exertion activities. Based on interviews with five joggers running with our system, we present preliminary insights about the experience of jogging with a flying robot. With our work, we hope to inspire and guide designers who are interested in developing embodied systems to support exertion activities.
AirPincher: a handheld device for recognizing delicate mid-air hand gestures BIBAFull-Text 83-84
  Kyeongeun Seo; Hyeonjoong Cho
We propose AirPincher, a handheld device for recognizing delicate mid-air hand gestures. AirPincher is designed to overcome the disadvantages of the two kinds of existing hand-gesture-aware techniques: wearable sensor-based and external vision-based. Wearable sensor-based techniques require the cumbersome step of wearing sensors every time, and external vision-based techniques suffer performance dependence on the distance between the user and a remote display. AirPincher allows a user to hold the device in one hand and to generate several delicate mid-air finger gestures. The gestures are captured by several sensors embedded in AirPincher close to the fingers, which helps AirPincher avoid the aforementioned disadvantages of the existing techniques. It supports several delicate finger gestures, for example rubbing a thumb against a middle finger, swiping with a thumb on an index finger, and pinching with a thumb and an index finger. Due to the inherent haptic feedback of these gestures, AirPincher also supports eyes-free interaction. To validate AirPincher's feasibility, we implemented two use cases: controlling a pointing cursor and moving a virtual 3D object on a remote screen.
Contelli: a user-controllable intelligent keyboard for watch-sized small touchscreens BIBAFull-Text 85-86
  Taik Heon Rhee; Kwangmin Byeon; Hochul Shin
Intelligent keyboards speed up text entry by correcting a user's erroneous input, but they have a significant drawback: the user always has to watch for and judge the suggested corrections. Contelli, a user-controllable intelligent keyboard, monitors the duration of each key tap and analyzes the possibility of mistyping only for short-tapped letters. A long-tapped letter is regarded as precise input and is excluded from the process of generating candidates from a lexicon. Using Contelli, a user can actively "control" the intelligent keyboard: s/he can type ordinary words quickly on watch-sized small touchscreens, and can also enter a word exactly as typed without switching off automatic replacement or performing additional actions to undo a replacement. In addition, long-tapping part of a string reduces the number of replacement candidates, which contributes to more precise word replacement for highly erroneous input typed on small touchscreens.
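The candidate-filtering idea in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the 300 ms threshold, the equal-length matching rule, and the function names are all assumptions.

```python
LONG_TAP_MS = 300  # assumed threshold separating precise (long) from casual (short) taps

def candidates(taps, lexicon):
    """Sketch of Contelli-style candidate generation.

    taps: list of (letter, duration_ms) pairs as typed by the user.
    Long-tapped letters are treated as precise input and must match the
    candidate word exactly; short-tapped letters may be mistypes and are
    allowed to differ.
    """
    typed = "".join(ch for ch, _ in taps)
    fixed = [i for i, (_, dur) in enumerate(taps) if dur >= LONG_TAP_MS]
    result = []
    for word in lexicon:
        if len(word) != len(typed):
            continue  # simplifying assumption: only same-length candidates
        if any(word[i] != typed[i] for i in fixed):
            continue  # contradicts a precise (long-tapped) letter
        result.append(word)
    return result
```

For example, typing c-a-t with a long tap on the final "t" rules out "car" and "can" as replacement candidates, since the precise letter must be preserved.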
G-raffe: an elevating tangible block supporting 2.5D interaction in a tabletop computing environment BIBAFull-Text 87-88
  Jungu Sim; Chang-Min Kim; Seung-Woo Nam; Tek-Jin Nam
We present an elevating tangible block, G-raffe, supporting 2.5-dimensional (2.5-D) interaction in a tabletop computing environment. There is a lack of specialized interface devices for tabletop computing environments. G-raffe overcomes the limitation of conventional 2-D interactions inherited from the vertical desktop computing setting. We adopted a rollable metal tape structure to create up and down movements in a small volume of the block. This also becomes a connecting device for a mobile display to be used with the tabletop computer. We report on our design rationale as well as the results of a preliminary user study.
Building implicit interfaces for wearable computers with physiological inputs: zero shutter camera and phylter BIBAFull-Text 89-90
  Tomoki Shibata; Evan M. Peck; Daniel Afergan; Samuel W. Hincks; Beste F. Yuksel; Robert J. K. Jacob
We propose implicit interfaces that use passive physiological input as an additional communication channel between wearable devices and their wearers. A defining characteristic of physiological input is that it is implicit and continuous, distinguishing it from conventional event-driven input on a keyboard, for example, which is explicit and discrete. Based on the fundamental differences between the two types of input, we introduce a core framework for building implicit interfaces that follows three key principles: Subscription, Accumulation, and Interpretation of implicit inputs. Unlike a conventional event-driven system, our framework subscribes to continuous streams of input data, accumulates the data in a buffer, and subsequently attempts to recognize patterns in the accumulated data -- upon request from the application, rather than directly in response to input events. Finally, to demonstrate the impact of implicit interfaces in the real world, we introduce two prototype applications for Google Glass: Zero Shutter Camera, which triggers a camera snapshot, and Phylter, which filters notifications; both leverage the wearer's physiological state.
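The Subscription / Accumulation / Interpretation pattern described above can be sketched in a few lines. This is an illustrative reading of the abstract, not the authors' framework: the class name, buffer size, and interpreter signature are assumptions.

```python
from collections import deque

class ImplicitChannel:
    """Sketch of the three-principle framework for implicit physiological input.

    Subscription: on_sample() is registered as a callback on a continuous
    sensor stream and runs no application logic per event.
    Accumulation: samples collect in a bounded buffer.
    Interpretation: pattern recognition runs only when the application asks.
    """

    def __init__(self, interpreter, window=100):
        self.buffer = deque(maxlen=window)  # Accumulation: bounded history
        self.interpreter = interpreter

    def on_sample(self, value):
        # Subscription: just record the sample; nothing fires per event.
        self.buffer.append(value)

    def interpret(self):
        # Interpretation: analyze accumulated data on request.
        return self.interpreter(list(self.buffer))
```

An application might, for instance, pass an interpreter that averages a cognitive-workload signal over the window and query it only when deciding whether to show a notification.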
PoliTel: mobile remote presence system that autonomously adjusts the interpersonal distance BIBAFull-Text 91-92
  Masanori Yokoyama; Masafumi Matsuda; Shinyo Muto; Naoyoshi Kanamaru
Mobile Remote Presence (MRP) systems that use smart devices such as smartphones and tablet PCs as video conferencing equipment are becoming popular. There is a wide variety of smart devices, and their appearance varies from one to another. We assumed that the appropriate interpersonal distance for an MRP system varies depending on the appearance of the smart device. To confirm this assumption, we conducted a preliminary experiment, whose results suggest that the proper interpersonal distance increases as the video size increases. It is known that the task load of the remote operator of an MRP system increases if the operator must manually control the system to keep the interpersonal distance at the appropriate level, which adversely affects the quality of communication through the MRP. To resolve this problem, we propose PoliTel, a novel MRP system that autonomously adjusts the interpersonal distance according to the appearance of the smart device by controlling the position or video size of the MRP, allowing the operator to concentrate on the conversation with the person facing the MRP system.
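One way to read the autonomous adjustment is as a simple controller that drives the MRP toward a distance that grows with the displayed video size. The linear model and all constants below are illustrative assumptions, not values from the paper.

```python
def target_distance(video_height_cm, base_cm=80.0, gain=2.0):
    """Assumed linear model of the preliminary finding: the proper
    interpersonal distance (cm) increases with displayed video size."""
    return base_cm + gain * video_height_cm

def velocity_command(current_distance_cm, video_height_cm, kp=0.5):
    """Proportional control toward the target distance.
    Positive output = move the MRP away from the person."""
    error = target_distance(video_height_cm) - current_distance_cm
    return kp * error
```

Under this sketch, a larger tablet display yields a larger target distance, and the robot (or alternatively the video size) is nudged until the error reaches zero, freeing the operator from manual distance keeping.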
Riding the plane: bimanual, desktop 3D manipulation BIBAFull-Text 93-94
  Jinbo Feng; Zachary Wartell
A bimanual 7 degree-of-freedom (DOF) manipulation technique based on a hybrid 3D cursor driven by the combination of a mouse and a trackball is presented. This technique allows the user to move the cursor to a target location in a 3D scene by following a conceived straight or curved path. In a pilot study, participants learned the technique in a short time and performed the docking task steadily without physical fatigue.
Hairlytop interface: a basic tool for active interfacing BIBAFull-Text 95-96
  Shuhei Umezu; Masaru Ohkubo; Yoshiharu Ooide; Takuya Nojima
The Hairlytop Interface is a highly scalable interface composed of hair-like units called smart hairs. The original version of the smart hair comprised a shape-memory alloy, drive circuits, and a light sensor. Simply placing a smart hair above a light-emitting display enables it to be bent and controlled by modulating the intensity of light from the display. Various prototypes of the Hairlytop Interface have been created to show its high flexibility in configuration. This flexibility should help users develop their own moving interfaces.
Speeda: adaptive speed-up for lecture videos BIBAFull-Text 97-98
  Chen-Tai Kao; Yen-Ting Liu; Alexander Hsu
Increasing the playback speed of lecture videos is a common technique for shortening watching time. This creates challenges when part of the lecture becomes too fast to be discernible, even if the overall playback speed is acceptable. In this paper, we present a speed-up system that preserves lecture clarity at high playback rates. A user test was conducted to evaluate the system; the results indicate that it significantly improves users' comprehension.
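A natural way to realize the adaptive speed-up the abstract describes is to cap each segment's effective speech rate instead of applying one global rate. The abstract does not give the algorithm; the syllables-per-second cap and per-segment formulation below are assumptions for illustration.

```python
def segment_rates(global_rate, syllable_rates, max_sps=6.0):
    """Per-segment playback rates for a lecture.

    global_rate: the speed-up the user requested (e.g. 2.0x).
    syllable_rates: measured syllables/sec for each lecture segment.
    max_sps: assumed ceiling on effective (sped-up) speech rate that
    remains discernible.

    Fast-spoken segments get a smaller rate than the global one so that
    sped-up speech never exceeds max_sps syllables/sec.
    """
    return [
        min(global_rate, max_sps / sps) if sps > 0 else global_rate
        for sps in syllable_rates
    ]
```

For example, at a requested 2.0x, a segment already spoken at 4 syllables/sec would be slowed to 1.5x (so its effective rate stays at 6 syllables/sec), while a 2-syllables/sec segment plays at the full 2.0x.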
Structured handoffs in expert crowdsourcing improve communication and work output BIBAFull-Text 99-100
  Alex Embiricos; Negar Rahmati; Nicole Zhu; Michael S. Bernstein
Expert crowdsourcing allows specialized, remote teams to complete projects, often large and involving multiple stages. Its execution is complicated due to communication difficulties between remote workers. This paper investigates whether structured handoff methods, from one worker to the next, improve final product quality by helping the workers understand the input of their tasks and reduce overall integration cost. We investigate this question through 1) a "live" handoff method where the next worker shadows the former via screen sharing technology and 2) a "recorded" handoff, where workers summarize work done for the next, via a screen capture and narration. We confirm the need for a handoff process. We conclude that structured handoffs result in higher quality work, improved satisfaction (especially for workers with creative tasks), improved communication of non-obvious instructions, and increased adherence to the original intent of the project.
LightWeight: wearable resistance visualizer for rehabilitation BIBAFull-Text 101-102
  Zane Cochran; Brianna Tomlinson; Dar-Wei Chen; Kunal Patel
People recovering from arm injuries are often prescribed limits to the amount of strain they can place on their muscles at a given point during the recovery process. However, it is sometimes difficult for them to know when a given activity creates strain in excess of these limits. To inform this process, we have developed a prototype, the LightWeight, and describe it here. The aim of the LightWeight is to inform users of the strain on targeted muscles as the activity occurs, and to display the relationship of that strain to the aforementioned limits. LightWeight is embedded within a compression sleeve that measures muscle strain through conductive fabric and EMG while displaying that information through an intuitive circular LED display.
Push-push: a two-point touchscreen operation utilizing the pressed state and the hover state BIBAFull-Text 103-104
  Jaehyun Han; Sunggeun Ahn; Geehyuk Lee
A drag operation is used for many two-point functions in mouse-based graphical user interfaces (GUIs), but its usage in touchscreen GUIs is limited because it is mainly used for scrolling. We propose Push-Push as a second two-point touchscreen operation that is not in conflict with a drag operation. We implemented three application scenarios and showed how Push-Push can be used effectively for other two-point functions while overlapping drag operations are used for scrolling.
Slack-scroll: sharing sliding operations among scrolling and other GUI functions BIBAFull-Text 105-106
  Eunhye Youn; Geehyuk Lee
Sliding is one of the basic touchscreen operations, but it is mainly used for scrolling in mobile touchscreen GUIs. As a way to share sliding operations between scrolling and other GUI functions, we propose Slack-Scroll. We implemented two application scenarios of Slack-Scroll and assessed their feasibility in a user study. All participants accepted and adapted well to the new techniques enabled by Slack-Scroll.
FlickBoard: enabling trackpad interaction with automatic mode switching on a capacitive-sensing keyboard BIBAFull-Text 107-108
  Ying-Chao Tung; Ta-Yang Cheng; Neng-Hao Yu; Mike Y. Chen
We present FlickBoard, which combines a trackpad and a keyboard into the same interaction area to reduce hand movement between separate keyboards and trackpads. It supports automatic input-mode detection and switching (i.e., trackpad vs. keyboard mode) without explicit user input. We developed a prototype by embedding a 58x20 capacitive sensing grid into a soft keyboard cover, and use machine learning to distinguish between moving a cursor (trackpad mode) and entering text (keyboard mode). Our prototype has a thin profile and can be placed over existing keyboards.
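The mode-switching decision can be illustrated with a rule-based stand-in for the paper's learned classifier: features such as the number of simultaneous contacts and how far a contact travels separate cursor movement from key presses. The features, thresholds, and labels below are assumptions, not the authors' model.

```python
def classify_window(contacts, travel_threshold=5.0):
    """Rule-based sketch of FlickBoard-style mode detection.

    contacts: list of touch tracks observed in a short time window; each
    track is a list of (x, y) positions in sensor-grid cells.

    Heuristic (a stand-in for the trained classifier): a single contact
    that travels far across the grid looks like cursor movement
    (trackpad); brief, mostly stationary contacts look like key taps
    (keyboard).
    """
    def travel(track):
        # Total Manhattan distance moved over the window.
        return sum(abs(x2 - x1) + abs(y2 - y1)
                   for (x1, y1), (x2, y2) in zip(track, track[1:]))

    if len(contacts) == 1 and travel(contacts[0]) > travel_threshold:
        return "trackpad"
    return "keyboard"
```

In a real system these window features would feed a trained classifier rather than a fixed threshold, but the shape of the decision is the same.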
Traceband: locating missing items by visual remembrance BIBAFull-Text 109-110
  Farshid Tavakolizadeh; Jiawei Gu; Bahador Saket
Finding missing items has always been troublesome. Several systems have been proposed to tackle this hassle, yet they are held back by excessive setup time, operational cost, and limited effectiveness. We present Traceband, a lightweight and portable bracelet that keeps track of the commonly used objects a user interacts with. Users can find the location of missing items via a web-based software portal.
Tangential force input for touch panels using bezel-aligned elastic pillars and a transparent sheet BIBAFull-Text 111-112
  Yuriko Nakai; Shinya Kudo; Ryuta Okazaki; Hiroyuki Kajimoto
This research aims to enable tangential force input for touch panels by measuring the tangential force. The system is composed of a plastic sheet on a touch panel, urethane pillars aligned at the four corners of the bezel, and a case on top of the pillars. When the sheet moves with a finger, the pillars deform, so the tangential force can be obtained by measuring the movement of the finger. We evaluated the method and found that the system achieved realistic force-sensing accuracy in every direction. This input method will enable new applications for touch panels, such as using any part of the touch panel surface as a joystick, or modeling virtual objects by deforming them with the fingers.
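Since the pillars deform elastically, the force estimate reduces to a spring model on the measured sheet displacement. The linear (Hookean) assumption and the stiffness constant below are illustrative; the paper does not give a calibration.

```python
def tangential_force(dx_mm, dy_mm, k_shear=0.8):
    """Sketch of force recovery from sheet displacement.

    Assumed linear elastic model: the urethane pillars act as shear
    springs, so tangential force (N) is proportional to how far the
    transparent sheet has been dragged from its rest position (mm).
    k_shear is a hypothetical combined stiffness of the four pillars.

    Returns (fx, fy, magnitude).
    """
    fx = k_shear * dx_mm
    fy = k_shear * dy_mm
    return fx, fy, (fx ** 2 + fy ** 2) ** 0.5
```

The touch panel already reports the finger position, so the displacement of the sheet (finger position minus touch-down position) is the only extra measurement needed; direction-independence falls out of applying the same stiffness on both axes.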
Ethereal: a toolkit for spatially adaptive augmented reality content BIBAFull-Text 113-114
  Gheric Speiginer; Blair MacIntyre
In this poster, we describe a framework and toolkit (Ethereal) for creating spatially adaptive content based on complex spatial and visual metrics in augmented reality, and demonstrate our approach with an illustrative example.
Reaching targets on discomfort region using tilting gesture BIBAFull-Text 115-116
  Youli Chang; Sehi L'Yi; Jinwook Seo
We present three novel methods that use tilting gestures to facilitate one-handed targeting in discomfort regions of a mobile touch screen: TiltSlide, TiltReduction, and TiltCursor. We conducted a controlled user study to evaluate their performance and user preference against related methods, i.e., ThumbSpace, Edge Triggered with Extendible Cursor (ETEC), and Direct Touch (directly touching with a thumb). All three methods outperformed ThumbSpace in both speed and accuracy. Moreover, TiltReduction required less thumb/grip movement than Direct Touch while showing comparable speed and accuracy.
Integrating optical waveguides for display and sensing on pneumatic soft shape changing interfaces BIBAFull-Text 117-118
  Lining Yao; Jifei Ou; Daniel Tauber; Hiroshi Ishii
We introduce the design and fabrication process for integrating optical fiber into pneumatically driven soft composite shape-changing interfaces. Embedded optical waveguides can provide both sensing and illumination, adding one more building block to the design of soft pneumatic shape-changing interfaces.
Towards responsive retargeting of existing websites BIBAFull-Text 119-120
  Gilbert Louis Bernstein; Scott Klemmer
Websites need to be displayed on a panoply of different devices today, but most websites are designed with fixed widths only appropriate to browsers on workstation computers. We propose to programmatically rewrite websites into responsive formats capable of adapting to different device display sizes. To accomplish this goal, we cast retargeting as a cross-compilation problem. We decompose existing HTML pages into boxes (lexing), infer hierarchical structure between these boxes (parsing) and finally generate parameterized layouts from the hierarchical structure (code generation). This document describes preliminary work on ReMorph, a prototype 'retargeting as cross-compilation' system.
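The "parsing" stage of the cross-compilation pipeline above, inferring hierarchy between boxes, can be sketched as containment analysis: each box's parent is the smallest box that fully encloses it. This is an illustrative reconstruction of the idea, not ReMorph's actual algorithm.

```python
def infer_hierarchy(boxes):
    """Sketch of hierarchy inference over lexed page boxes.

    boxes: dict mapping box name -> (x, y, width, height).
    Returns a dict mapping each box name to its parent's name (None for
    roots), choosing the smallest enclosing box as the parent.
    """
    def encloses(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        return (a != b and ax <= bx and ay <= by
                and ax + aw >= bx + bw and ay + ah >= by + bh)

    parent = {}
    for name, rect in boxes.items():
        enclosing = [(r[2] * r[3], other)           # (area, name)
                     for other, r in boxes.items()
                     if other != name and encloses(r, rect)]
        parent[name] = min(enclosing)[1] if enclosing else None
    return parent
```

Given this tree, the "code generation" stage could emit a parameterized (e.g. flexbox-style) layout per node, with widths expressed relative to the parent instead of fixed pixels.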
bioPrint: an automatic deposition system for bacteria spore actuators BIBAFull-Text 121-122
  Jifei Ou; Lining Yao; Clark Della Silva; Wen Wang; Hiroshi Ishii
We propose an automatic deposition method for bacteria spores, which deform thin soft materials under environmental humidity change. We describe the process of two-dimensionally printing the spore solution as well as a design application. This research intends to contribute to the understanding of controlling and pre-programming the transformation of future interfaces.
FatBelt: motivating behavior change through isomorphic feedback BIBAFull-Text 123-124
  Trevor Pels; Christina Kao; Saguna Goel
The ultimate problem of systems facilitating long-term health and fitness goals is the disconnect between an action and its eventual consequence. As the long-term effects of behavior change are not immediately apparent, it can be hard to motivate the desired behavior over a long period of time. As such, we introduce a system that uses physical feedback through a wearable device that inflates around the stomach as a response to calorie overconsumption, simulating the long-term weight-gain associated with over-eating. We tested a version of this system with 12 users over a period of 2 days, and found a significant decrease in consumption over a baseline period of the same length, suggesting that through physical response, FatBelt moved calorie intake drastically closer to participants' goals. Interviews with participants indicate that isomorphism to the long-term consequences was a large factor in the system's efficacy. In addition, the wearable, physical feedback was perceived as an extension of the user's body, an effect with great emotional consequences.
M-gesture: geometric gesture authoring framework for multi-device gestures using wearable devices BIBAFull-Text 125-126
  Ju-Whan Kim; Tek-Jin Nam
Wearable devices and mobile devices have great potential to detect various body motions as they are attached to different body parts. We present M-Gesture, a geometric gesture authoring framework using multiple wearable devices. We implemented physical metaphor, geometric gesture language, and continuity in spatial layout for easy and clear gesture authoring. M-Gesture demonstrates the use of geometric notation as an intuitive gesture language.
Eugenie: gestural and tangible interaction with active tokens for bio-design BIBAFull-Text 127-128
  Casey Grote; Evan Segreto; Johanna Okerlund; Robert Kincaid; Orit Shaer
We present a case study of a tangible user interface that implements novel interaction techniques for the construction of complex queries in large data sets. Our interface, Eugenie, utilizes gestural interaction with active physical tokens and a multi-touch interactive surface to aid in the collaborative design process of synthetic biological circuits. We developed new interaction techniques for navigating large hierarchical data sets and for exploring a combinatorial design space. The goal of this research is to study the effect of gestural and tangible interaction with active tokens on sense-making throughout the bio-design process.
OverCode: visualizing variation in student solutions to programming problems at scale BIBAFull-Text 129-130
  Elena L. Glassman; Jeremy Scott; Rishabh Singh; Philip Guo; Robert Miller
In MOOCs, a single programming exercise may produce thousands of solutions from learners. Understanding solution variation is important for providing appropriate feedback to students at scale. The wide variation among these solutions can be a source of pedagogically valuable examples, and can be used to refine the autograder for the exercise by exposing corner cases. We present OverCode, a system for visualizing and exploring thousands of programming solutions. OverCode uses both static and dynamic analysis to cluster similar solutions, and lets instructors further filter and cluster solutions based on different criteria. We evaluated OverCode against a non-clustering baseline in a within-subjects study with 24 teaching assistants, and found that the OverCode interface allows teachers to more quickly develop a high-level view of students' understanding and misconceptions, and to provide feedback that is relevant to more students.
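The dynamic half of OverCode's clustering can be illustrated by grouping solutions whose runtime behavior matches on a probe input. This is a deliberately crude stand-in; OverCode's real pipeline also normalizes code statically (e.g. renaming variables by the values they take on), which is omitted here.

```python
from collections import defaultdict

def cluster_by_behavior(solutions, probe_input):
    """Sketch of dynamic-analysis clustering of student solutions.

    solutions: list of (label, fn) pairs, where fn is a compiled student
    solution. Solutions producing the same result on the probe input land
    in the same cluster.
    Returns clusters as sorted lists of labels.
    """
    clusters = defaultdict(list)
    for label, fn in solutions:
        # repr() makes the behavior key hashable even for lists/dicts.
        clusters[repr(fn(probe_input))].append(label)
    return sorted(clusters.values())
```

On a doubling exercise, for instance, `x * 2` and `x + x` fall into one cluster while a buggy `x ** 2` is isolated, surfacing a misconception to the instructor (and a corner case for the autograder).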