| Neat: a set of flexible tools and gestures for layout tasks on interactive displays | | BIBA | Full-Text | 1-10 | |
| Mathias Frisch; Ricardo Langner; Raimund Dachselt | |||
| Creating accurate layouts of graphical objects is an important activity in many graphics applications, such as design tools, presentation software or diagram editors. In this paper, we contribute Natural and Effective Layout Techniques (Neat), a system that provides a consistent set of multi-touch tools and gestures for aligning and distributing graphical objects on interactive surfaces. Neat explicitly considers expert requirements and supports a rich and consistent set of layout functions. Amongst others, it minimizes visual distraction by layout tools, combines separate interaction steps into compound ones, and allows effective interaction by combining multi-touch and pen input. Furthermore, Neat provides a set of bimanual gestures for accomplishing layout tasks in a quick and effective way without explicitly invoking any tools. From initial expert user feedback we derive several principles for layout tools on interactive displays. | |||
| Pointable: an in-air pointing technique to manipulate out-of-reach targets on tabletops | | BIBA | Full-Text | 11-20 | |
| Amartya Banerjee; Jesse Burstyn; Audrey Girouard; Roel Vertegaal | |||
| Selecting and moving digital content on interactive tabletops often involves accessing the workspace beyond arm's reach. We present Pointable, an in-air, bimanual perspective-based interaction technique that augments touch input on a tabletop for distant content. With Pointable, the dominant hand selects remote targets, while the non-dominant hand can scale and rotate targets with a dynamic C/D gain. We conducted 3 experiments; the first showed that pointing at a distance using Pointable has a Fitts' law throughput comparable to that of a mouse. In the second experiment, we found that Pointable had the same performance as multi-touch input in a resize, rotate and drag task. In a third study, we observed that when given the choice, over 75% of participants preferred to use Pointable over multi-touch for target manipulation. In general, Pointable allows users to manipulate out-of-reach targets, without loss of performance, while minimizing the need to lean, stand up, or involve collocated collaborators. | |||
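The first experiment's mouse comparison rests on Fitts' law throughput. As a reading aid (our own illustration, not the authors' code), here is a minimal Python sketch of one common throughput formulation, assuming the ISO 9241-9 effective-width convention; the trial data are hypothetical:

```python
import math
import statistics

def fitts_throughput(distance, endpoint_errors, movement_times):
    """Effective throughput (bits/s) for one pointing condition,
    following the common ISO 9241-9 effective-width convention:
    We = 4.133 * SD of the selection endpoints."""
    w_e = 4.133 * statistics.stdev(endpoint_errors)
    id_e = math.log2(distance / w_e + 1)   # effective index of difficulty (bits)
    return id_e / statistics.mean(movement_times)

# Hypothetical trial data: 320 mm target distance, signed endpoint
# errors in mm, movement times in seconds.
print(fitts_throughput(320, [2.1, -3.5, 1.0, 4.2, -1.8],
                       [0.82, 0.75, 0.91, 0.88, 0.79]))
```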
| Designing user-, hand-, and handpart-aware tabletop interactions with the TouchID toolkit | | BIBA | Full-Text | 21-30 | |
| Nicolai Marquardt; Johannes Kiemer; David Ledo; Sebastian Boring; Saul Greenberg | |||
| Recent work in multi-touch tabletop interaction introduced many novel techniques that let people manipulate digital content through touch. Yet most only detect touch blobs. This ignores richer interactions that would be possible if we could identify (1) which part of the hand, (2) which side of the hand, and (3) which person is actually touching the surface. Fiduciary-tagged gloves were previously introduced as a simple but reliable technique for providing this information. The problem is that its low-level programming model hinders the way developers could rapidly explore new kinds of user- and handpart-aware interactions. We contribute the TouchID toolkit to solve this problem. It allows rapid prototyping of expressive multi-touch interactions that exploit the aforementioned characteristics of touch input. TouchID provides an easy-to-use event-driven API as well as higher-level tools that facilitate development: a glove configurator to rapidly associate particular glove parts to handparts; and a posture configurator and gesture configurator for registering new hand postures and gestures for the toolkit to recognize. We illustrate TouchID's expressiveness by showing how we developed a suite of techniques that exploits knowledge of which handpart is touching the surface. | |||
| Eye-Shield: protecting bystanders from being blinded by mobile projectors | | BIBA | Full-Text | 31-34 | |
| Bonifaz Kaufmann; Martin Hitz | |||
| This paper introduces Eye-Shield, a mobile projector-camera prototype designed to protect people from being accidentally blinded by a handheld projector. Since they might be used regularly in public space, mobile projectors can be seen as an intrusive technology. The emitted projector light can easily annoy bystanders, which might lead to negative social consequences, particularly if the light shines directly into someone's face. The proposed prototype uses a camera attached to a mobile projector to detect faces within the projection area in order to block out the part of the image that would otherwise be projected onto a human face. | |||
| FuSA2 touch display: a furry and scalable multi-touch display | | BIBA | Full-Text | 35-44 | |
| Kosuke Nakajima; Yuichi Itoh; Takayuki Tsukitani; Kazuyuki Fujita; Kazuki Takashima; Yoshifumi Kitamura; Fumio Kishino | |||
| We propose a furry and scalable multi-touch display called the "FuSA2 Touch Display." The furry tactile sensation of this surface affords various interactions such as stroking or clawing. The system utilizes plastic optical fiber bundles to realize the furry texture. The system can show visual feedback by projection and detects multi-touch input using a diffused illumination technique. We employed an optical property of the plastic fibers to integrate the input and output systems into a configuration simple enough that the display becomes scalable. We implemented a 24-inch display, evaluated its visual feedback and touch detection features, and found that it encourages users to interact with it in various ways. | |||
| Optical pressure sensing for tangible user interfaces | | BIBA | Full-Text | 45-48 | |
| Fabian Hennecke; Franz Berwein; Andreas Butz | |||
| In this paper we present a low-cost pressure sensing method for Tangible User Interface (TUI) objects on interactive surfaces, using conventional FTIR and DI tracking. While current TUIs use optical tracking for an object's position and orientation, they rely on mechanical or electric enhancements to enable sensing of other input parameters such as pressure. Our approach uses dedicated marker pads for pressure sensing embedded into an optical marker pattern for position and orientation tracking. Two different marker designs allow different precision levels: Number of Contacts (NoC) allows click sensing and Area of Contact (AoC) enables continuous pressure sensing. We describe the working principles of the marker patterns and the construction of the corresponding tangible objects. We have tested continuous pressure sensing in a preliminary user study and also discuss the limitations of our approach. | |||
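The Area of Contact (AoC) design reduces, at its core, to mapping a tracked blob's contact area onto a pressure value. A minimal sketch of such a mapping, assuming hypothetical calibration bounds (the paper does not give these numbers):

```python
def aoc_pressure(contact_area_px, area_min=120.0, area_max=900.0):
    """Map the tracked contact area of an AoC marker pad (in camera
    pixels) to a normalized pressure estimate in [0, 1]. The
    calibration bounds are hypothetical and would be measured per
    pad material in a real setup."""
    p = (contact_area_px - area_min) / (area_max - area_min)
    return max(0.0, min(1.0, p))  # clamp to the valid range
```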
| KinectTouch: accuracy test for a very low-cost 2.5D multitouch tracking system | | BIBA | Full-Text | 49-52 | |
| Andreas Dippon; Gudrun Klinker | |||
| We present a simple solution for a touch detection system on any display at a very low cost. Using a Microsoft Kinect, we can detect fingers and objects on and above a display. We conducted a user study to evaluate the accuracy of this system compared to the accuracy of a capacitive touch monitor. The results show that the system cannot yet compete with an integrated system, but works well enough to be used as a touch detection system for large displays. | |||
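The abstract does not spell out the detection pipeline; a common way to turn a depth camera into a touch sensor is to threshold each depth frame against a background model of the display surface. A rough NumPy sketch of that general approach (an assumption on our part, not the authors' implementation):

```python
import numpy as np

def detect_touches(depth_mm, surface_mm, t_min=5.0, t_max=20.0):
    """Return a boolean mask of pixels lying in a thin slab just above
    a previously captured background model of the display surface --
    the usual 'finger is touching' heuristic for depth cameras.
    Thresholds (in mm) are illustrative, not taken from the paper."""
    height = surface_mm - depth_mm        # distance above the surface
    return (height > t_min) & (height < t_max)

# surface_mm would be, e.g., the per-pixel median of many empty-scene
# frames; connected components of the mask yield individual touches.
```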
| Augmenting touch interaction through acoustic sensing | | BIBA | Full-Text | 53-56 | |
| Pedro Lopes; Ricardo Jota; Joaquim A. Jorge | |||
| Recognizing how a person actually touches a surface has generated a strong interest within the interactive surfaces community. Although we agree that touch is the main source of information, unless other cues are accounted for, user intention might not be accurately recognized. We propose to expand the expressiveness of touch interfaces by augmenting touch with acoustic sensing. In our vision, users can naturally express different actions by touching the surface with different body parts, such as fingers, knuckles, fingernails, punches, and so forth -- not always distinguishable by touch technologies but recognized by acoustic sensing. Our contribution is the integration of touch and sound to expand the input language of surface interaction. | |||
| Enhanced interaction with physical toys | | BIBA | Full-Text | 57-60 | |
| Yasushi Matoba; Toshiki Sato; Hideki Koike | |||
| We developed an entertainment system that enhances the experience of playing with spinning tops by employing augmented reality technologies. A tabletop system tracks the positions and rotation speeds of multiple tops with a high-speed camera and displays audio and visual effects. A hand-held device, called an accelerator, enables virtual and physical contact between the user and a top by allowing the user to move and accelerate the top and obtain force feedback from it. We also propose a top battle game in which players interact with these tops. | |||
| Interactive phone call: synchronous remote collaboration and projected interactive surfaces | | BIBA | Full-Text | 61-70 | |
| Christian Winkler; Christian Reinartz; Diana Nowacka; Enrico Rukzio | |||
| Smartphones provide large amounts of personal data, functionalities, and apps, and constitute a substantial part of our daily communication. But during phone calls the phone cannot be used much beyond voice communication and does not offer support for synchronous collaboration. This is due to the fact that, first, despite the availability of alternatives, the phone is typically held at one's ear; and second, that the small mobile screen is less suited for use with existing collaboration software. This paper presents a novel in-call collaboration system that leverages projector phones, as they provide a large display that can be used while holding the phone to the ear to project an interactive interface anytime and anywhere. The system uses a desktop-metaphor user interface and provides a private and a shared space, live mirroring of the shared space, and user-defined access rights to shared content. We evaluated the system in a comparative user study. The results highlight the general benefits of synchronous in-call collaboration and, in particular, the advantages of the projected display and our developed concepts. Our findings inform future designers of synchronous remote collaboration software for interactive surfaces. | |||
| HATs: interact using height-adjustable tangibles in tabletop interfaces | | BIBA | Full-Text | 71-74 | |
| Haipeng Mi; Masanori Sugimoto | |||
| We present Height-Adjustable Tangibles (HATs) for tabletop interaction. HATs are active tangibles with 4 degrees of freedom that are capable of moving, rotating, and changing height. By adding height as an additional dimension for manipulation and representation, HATs offer more freedom to users than ordinary tangibles. HATs support bidirectional interaction, enabling them to reflect changes in the digital model via active visual feedback and to assist users via haptic feedback. A number of scenarios for using HATs are proposed, including interaction with complex and dependent models and applying HATs as tangible indicator widgets. We then introduce the implementation of HAT prototypes, for which we utilize motor-driven potentiometers to realize bidirectional interaction via the height dimension. | |||
| TaPS widgets: interacting with tangible private spaces | | BIBA | Full-Text | 75-78 | |
| Max Möllers; Jan Borchers | |||
| Interacting with private data is important in multi-user tabletop systems, but hard to implement with current technology. Existing approaches usually involve wearable devices such as shutter glasses or head-mounted displays that are cumbersome to wear. We present TaPS widgets, lightweight transparent widgets that only pass light coming from a particular direction to shield the content beneath them from other users, creating Tangible Private Spaces. TaPS widgets use low-cost hardware to provide tangible privacy controls on interactive tabletops. Informal studies indicate that TaPS widgets enable users to successfully move documents between public and private tabletop spaces without compromising privacy and allow for secret data entry. | |||
| Palm touch panel: providing touch sensation through the device | | BIBA | Full-Text | 79-82 | |
| Shogo Fukushima; Hiroyuki Kajimoto | |||
| We present a novel touch-sensitive handheld device, called Palm Touch Panel, which provides electro-tactile feedback on the back of the device, thus simulating the sensation of touching the user's palm directly through the device. Users hold the mobile device, which has an electro-tactile display attached at the back. When a finger touches a visual cue on the front screen panel, such as a button or an icon, the electro-tactile display at the back transmits the unique tactile sensation associated with that cue to the palm of the hand. As a result, we speculate that the user can manipulate visual information with less visual attention, or even potentially in an eyes-free manner. In this paper we discuss the creation of this unique mobile device that allows the palm to be used for tactile feedback, thus enhancing the touch screen experience. | |||
| Enhancing naturalness of pen-and-tablet drawing through context sensing | | BIBA | Full-Text | 83-86 | |
| Minghui Sun; Xiang Cao; Hyunyoung Song; Shahram Izadi; Hrvoje Benko; Francois Guimbretiere; Xiangshi Ren; Ken Hinckley | |||
| Among artists and designers, the pen-and-tablet combination is widely used for creating digital drawings, as digital pens outperform other input devices in replicating the experience of physical drawing tools. In this paper, we explore how contextual information such as the relationship between the hand, the pen, and the tablet can be leveraged in the digital drawing experience to further enhance its naturalness. By embedding sensors in the pen and the tablet to sense and interpret these contexts, we demonstrate how several physical drawing practices can be reflected and assisted in digital interaction scenarios. | |||
| Tangible actions | | BIBA | Full-Text | 87-96 | |
| Dustin Freeman; Ravin Balakrishnan | |||
| We present Tangible Actions, an ad-hoc, just-in-time, visual programming by example language designed for large multitouch interfaces. With the design of Tangible Actions, we contribute a continually-created system of programming tokens that occupy the same space as the objects they act on. Tangible Actions are created by the gestural actions of the user, and they allow the user to reuse and modify their own gestures with a lower interaction cost than the original gesture. We implemented Tangible Actions in three different tabletop applications, and ran an informal evaluation. While we found that study participants generally liked and understood Tangible Actions, having the objects and the actions co-located can lead to visual and interaction clutter. | |||
| RealTimeChess: lessons from a participatory design process for a collaborative multi-touch, multi-user game | | BIBA | Full-Text | 97-106 | |
| Jonathan Chaboissier; Tobias Isenberg; Frédéric Vernier | |||
| We report on a long-term participatory design process during which we designed and improved RealTimeChess, a collaborative but competitive game that is played using touch input by multiple people on a tabletop display. During the design process we integrated concurrent input from all players and pace control, allowing us to steer the interaction along a continuum between high-paced simultaneous and low-paced turn-based gameplay. In addition, we integrated tutorials for teaching interaction techniques, mechanisms to control territoriality, remote interaction, and alert feedback. Integrating these mechanisms during the participatory design process allowed us to examine their effects in detail, revealing, for instance, effects of the competitive setting on the perception of awareness as well as territoriality. More generally, the resulting application provided us with a testbed to study interaction on shared tabletop surfaces and yielded insights important for other time-critical or attention-demanding applications. | |||
| Adaptive personal territories for co-located tabletop interaction in a museum setting | | BIBA | Full-Text | 107-110 | |
| Daniel Klinkhammer; Markus Nitsche; Marcus Specht; Harald Reiterer | |||
| In this paper, we address the problem of designing for participation and parallel interaction with a walk-up-and-use tabletop system in a public exhibition environment. Motivated by the work practice of territoriality, we implement a novel, tabletop-integrated multi-user tracking system that provides data on a user's location and movement. Based on this robust hardware and software implementation, we present an interaction design that assigns a visually separated display space to each user, the space serving them as a personal territory. These territories can serve as affordances for initiating interactions; most notably they can support the multi-user coordination process during parallel co-located information exploration, which has been observed in our preliminary evaluation. | |||
| Triangle cursor: interactions with objects above the tabletop | | BIBA | Full-Text | 111-119 | |
| Sven Strothoff; Dimitar Valkov; Klaus Hinrichs | |||
| Extending the tabletop display into the third dimension using a stereoscopic projection offers the possibility to improve applications by using the volume above the table surface. The combination of multi-touch input and stereoscopic projection usually requires an indirect technique to interact with objects above the tabletop, as touches can only be detected on the surface. Triangle Cursor is a 3D interaction technique that allows specification of a 3D position and yaw rotation above the interactive tabletop. It was designed to avoid occlusions that disturb the stereoscopic perception. While Triangle Cursor uses an indirect approach, the position, the height above the surface and the yaw rotation can be controlled simultaneously, resulting in a 4 DOF manipulation technique. We evaluated Triangle Cursor in an initial user study and compared it to a related existing technique in a formal user study. Our experiments show that users were able to perform all tasks significantly faster with our technique without losing any precision. Most of the subjects considered the technique easy to use and satisfying. | |||
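One way to read the 4-DOF mapping: two touch points yield a 2D position (their midpoint), a height (from their spread), and a yaw angle (their orientation). The sketch below illustrates that reading; the specific spread-to-height function is our assumption, not the paper's formula:

```python
import math

def triangle_cursor(p1, p2, height_gain=0.5):
    """Map two touch points (x, y) to a 4-DOF cursor (x, y, z, yaw):
    the midpoint gives the 2D position, the finger spread gives the
    height above the table, and the segment orientation gives the yaw.
    The linear spread-to-height gain is an illustrative choice."""
    x = (p1[0] + p2[0]) / 2
    y = (p1[1] + p2[1]) / 2
    spread = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    z = height_gain * spread                        # height above surface
    yaw = math.atan2(p2[1] - p1[1], p2[0] - p1[0])  # radians
    return x, y, z, yaw
```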
| Design of unimanual multi-finger pie menu interaction | | BIBA | Full-Text | 120-129 | |
| Nikola Banovic; Frank Chun Yat Li; David Dearman; Koji Yatani; Khai N. Truong | |||
| Context menus, most commonly the right-click menu, are a traditional method of interaction when using a keyboard and mouse. They make a subset of an application's commands quickly available to the user. However, on tabletop touchscreen computers, context menus have all but disappeared. In this paper, we investigate how to design context menus for efficient unimanual multi-touch use. We investigate the limitations of the arm, wrist, and fingers and how they relate to human performance in multi-target selection tasks on a multi-touch surface. We show that selecting targets with multiple fingers simultaneously improves target selection performance compared to traditional single-finger selection, but also increases errors. Informed by these results, we present our own context menu design for horizontal tabletop surfaces. | |||
| Applying mobile device soft keyboards to collaborative multitouch tabletop displays: design and evaluation | | BIBA | Full-Text | 130-139 | |
| Sungahn Ko; KyungTae Kim; Tejas Kulkarni; Niklas Elmqvist | |||
| We present an evaluation of text entry methods for tabletop displays given small display space allocations, an increasingly important design constraint as tabletops become collaborative platforms. Small space is already a requirement of mobile text entry methods, and these can often be easily ported to tabletop settings. The purpose of this work is to determine whether these mobile text entry methods are equally useful for tabletop displays, or whether there are unique aspects of text entry on large, horizontal surfaces that influence design. Our evaluation consists of two studies designed to elicit differences between the mobile and tabletop domains. Results show that standard soft keyboards perform best, even at small space allocations. Furthermore, occlusion-reduction methods like Shift do not yield significant improvements to text entry; we speculate that this is due to the low ratio of resolution per surface units (i.e., DPI) for current tabletops. | |||
| Exploring physical information cloth on a multitouch table | | BIBA | Full-Text | 140-149 | |
| Kimberly Mikulecky; Mark Hancock; John Brosz; Sheelagh Carpendale | |||
| We expand multitouch tabletop information exploration by placing 2D information on a physically-based cloth in a shallow 3D viewing environment. Instead of offering 2D information on a rigid window or screen, we place our information on a soft flexible cloth that can be draped, pulled, stretched, and folded with multiple fingers and hands, supporting any number of information views. Combining our multitouch flexible information cloth with simple manipulable objects provides a physically-based information viewing environment that offers advantages similar to complex detail-in-context viewing. Previous detail-in-context views can be re-created by draping cloth over virtual objects in this physics simulation, thereby approximating many of the existing techniques by providing zoomed-in information in the context of zoomed-out information. These detail-in-context views are approximations because, rather than using distortion, the cloth naturally drapes and folds, showing magnified regions within a physically understandable context. In addition, the information cloth remains flexibly responsive, allowing one to tweak, unfold, and smooth out regions as desired. | |||
| ZoomPointing revisited: supporting mixed-resolution gesturing on interactive surfaces | | BIBA | Full-Text | 150-153 | |
| Matei Negulescu; Jaime Ruiz; Edward Lank | |||
| In this work, we explore the design of multi-resolution input on multi-touch devices. We devised a refined zooming technique named Offset, where the target is set at a location offset from the non-dominant hand while the dominant hand controls the direction and magnitude of the expansion. Additionally, we explored the use of non-persistent transformations of the view in our design. A think-aloud study that compared our design to a bimanual widget interaction and the classic pinch-based interaction in a freeform drawing task suggests that Offset offers benefits in terms of performance and degree of control. Moreover, for the drawing tasks, the transient nature of view transformations appears to impact not only performance, but workflow, focus of interaction, and the subjective quality of results by providing a constant overview of the user's task. | |||
| Data analysis on interactive whiteboards through sketch-based interaction | | BIBA | Full-Text | 154-157 | |
| Jeffrey Browne; Bongshin Lee; Sheelagh Carpendale; Nathalie Riche; Timothy Sherwood | |||
| When faced with the task of understanding complex data, it is common for people to work on whiteboards, where they can collaborate with others, brainstorm lists of important questions, and sketch simple visualizations. However, these sketched visualizations seldom contain real data. We address this gap by extending these sketched whiteboard visualizations with the actual data to be analyzed. Guided by an iterative design process, we developed a better understanding of the challenges involved in bringing sketch-based interaction to data analysis. In this work we contribute insights into the design challenges of sketch-based charting, and we present SketchVis, a system that leverages hand-drawn input for exploring data through simple charts. | |||
| Dynamic portals: a lightweight metaphor for fast object transfer on interactive surfaces | | BIBA | Full-Text | 158-161 | |
| Simon Voelker; Malte Weiss; Chat Wacharamanotham; Jan Borchers | |||
| We introduce Dynamic Portals, a lightweight interaction technique to transfer virtual objects across tabletops. They maintain the spatial coherence of objects and inherently align them to the recipients' workspace. Furthermore, they allow the exchange of digital documents among multiple users. A remote view enables users to align their objects at the target location. This paper explores the interaction technique and shows how our concept can also be applied as a zoomable viewport and a shared workspace. | |||
| Firestorm: a brainstorming application for collaborative group work at tabletops | | BIBA | Full-Text | 162-171 | |
| Andrew Clayphan; Anthony Collins; Christopher Ackad; Bob Kummerfeld; Judy Kay | |||
| The tabletop computer interface has the potential to support idea generation by a group using the brainstorming technique. This paper describes the design and implementation of a tabletop brainstorming system. To gain insights into its effectiveness, we conducted a user study which compared our system against a more conventional approach. We analysed the processes and results with the goal of gaining an understanding of the ways a tabletop brainstorming system can support the phases of this activity. We found that our tabletop interface facilitated the creation of more ideas and participants tended to create more categories. We observed that the tabletop provides a useful record of the group processes and this is valuable for reviewing how well a group followed recommended brainstorming processes. Our contributions are a new tabletop brainstorming system and insights into the nature of the benefits a tabletop affords for brainstorming and for capturing the processes employed by a group. | |||
| Who did what? Who said that?: Collaid: an environment for capturing traces of collaborative learning at the tabletop | | BIBA | Full-Text | 172-181 | |
| Roberto Martínez; Anthony Collins; Judy Kay; Kalina Yacef | |||
| Tabletops have the potential to provide new ways to support collaborative learning generally and, more specifically, to aid people in learning to collaborate more effectively. To achieve this potential, we need to understand how to design tabletop environments that capture relevant information about collaboration processes and make it available in a form that is useful for learners, their teachers and facilitators. This paper draws upon research in computer supported collaborative learning to establish a set of principles for the design of a tabletop learning system. We then show how these have been used to design our Collaid (Collaborative Learning Aid) environment. Key features of this system are: capture of multi-modal data about collaboration in a tabletop activity using a microphone array and a depth sensor; integration of these data with other parts of the learning system; transformation of the data into visualisations depicting the processes that occurred during the collaboration at the table; and sequence mining of the interaction logs. The main contributions of this paper are our design guidelines used to build the Collaid environment and the demonstration of its use in a collaborative concept mapping learning tool, applying data mining and visualisations of collaboration. | |||
| Flow of electrons: an augmented workspace for learning physical computing experientially | | BIBA | Full-Text | 182-191 | |
| Bettina Conradi; Verena Lerch; Martin Hommer; Robert Kowalski; Ioanna Vletsou; Heinrich Hussmann | |||
| Physical computing empowers people to design and customize electronic hardware tailored to their individual needs. This often involves "tinkering" with components and connections, but due to the intangible nature of electricity, this can be difficult, especially for novices. We use a multistage design process to design, build and evaluate a physical prototyping workspace for novices to learn about real physical computing hardware. The workspace consists of a horizontal surface that tracks physical components like sensors, actuators, and microcontroller boards and augments them with additional digital information in situ. By digitally exploring various means of connecting components, users can experientially learn how to build a functioning circuit and then transition directly to building it physically. In a user study, we found that this system motivates learners by encouraging them and building a sense of competence, while also providing a stimulating experience. | |||
| "Point it, split it, peel it, view it": techniques for interactive reservoir visualization on tabletops | | BIBA | Full-Text | 192-201 | |
| Nicole Sultanum; Sowmya Somanath; Ehud Sharlin; Mario Costa Sousa | |||
| Reservoir engineers rely on virtual representations of oil reservoirs to make crucial decisions relating, for example, to the modeling and prediction of fluid behavior, or to the optimal locations for drilling wells. Therefore, they are in constant pursuit of better virtual representations of the reservoir models, improved user awareness of their embedded data, and more intuitive ways to explore them, all ultimately leading to more informed decision making. Tabletops have great potential for providing powerful interactive representations to reservoir engineers, as well as enhancing the flexibility, immediacy and overall capabilities of their analysis, consequently bringing more confidence into the decision-making process. In this paper, we present a collection of 3D reservoir visualization techniques on tabletop interfaces applied to the domain of reservoir engineering, and argue that these provide greater insight into reservoir models. We support our claims with findings from a qualitative user study conducted with 12 reservoir engineers, which gave us insight into our techniques, as well as a discussion on the potential of tabletop-based visualization solutions for the domain of reservoir engineering. | |||
| The eLabBench: an interactive tabletop system for the biology laboratory | | BIBA | Full-Text | 202-211 | |
| Aurélien Tabard; Juan-David Hincapié-Ramos; Morten Esbensen; Jakob E. Bardram | |||
| We present the eLabBench -- a tabletop system supporting experimental research in the biology laboratory. The eLabBench allows biologists to organize their experiments around the notions of activities and resources, and seamlessly roam information between their office computer and the digital laboratory bench. At the bench, biologists can pull up digital resources, annotate them, and interact with hybrid (tangible + digital) objects such as racks of test tubes. This paper focuses on the eLabBench's design, and presents three main contributions: First, based on observations we highlight a set of characteristics that digital benches should support in a laboratory. Second, we describe the eLabBench, including a simple implementation of activity-based computing for tabletop environments, with support for activity roaming, note-taking, and hybrid objects. Third, we present preliminary feedback on the eLabBench based on an ongoing deployment in a biology laboratory, and propose a design space for single-user, work-oriented tabletop systems. | |||
| Code space: touch + air gesture hybrid interactions for supporting developer meetings | | BIBA | Full-Text | 212-221 | |
| Andrew Bragdon; Rob DeLine; Ken Hinckley; Meredith Ringel Morris | |||
| We present Code Space, a system that contributes touch + air gesture hybrid interactions to support co-located, small group developer meetings by democratizing access, control, and sharing of information across multiple personal devices and public displays. Our system uses a combination of a shared multi-touch screen, mobile touch devices, and Microsoft Kinect sensors. We describe cross-device interactions, which use a combination of in-air pointing for social disclosure of commands, targeting and mode setting, combined with touch for command execution and precise gestures. In a formative study, professional developers were positive about the interaction design, and most felt that pointing with hands or devices and forming hand postures are socially acceptable. Users also felt that the techniques adequately disclosed who was interacting and that existing social protocols would help to dictate most permissions, but also felt that our lightweight permission feature helped presenters manage incoming content. | |||
| Display-adaptive window management for irregular surfaces | | BIBA | Full-Text | 222-231 | |
| Manuela Waldner; Raphael Grasset; Markus Steinberger; Dieter Schmalstieg | |||
| Current projectors can easily be combined to create an everywhere display, using all suitable surfaces in offices or meeting rooms for the presentation of information. However, the resulting irregular display is not well supported by traditional desktop window managers, which are optimized for rectangular screens. In this paper, we present novel display-adaptive window management techniques, which provide semi-automatic placement for desktop elements (such as windows or icons) for users of large, irregularly shaped displays. We report results from an exploratory study, which reveals interesting emerging strategies of users in the manipulation of windows on large irregular displays and shows that the new techniques increase subjective satisfaction with the window management interface. | |||
| Using mobile phones to interact with tabletop computers | | BIBA | Full-Text | 232-241 | |
| Christopher McAdam; Stephen Brewster | |||
| Tabletop computers can be used by several people at the same time, and many are likely to be carrying mobile phones. We examine different ways of performing interactions in this multi-device ecology. We conducted a study into the use of a phone as a controller for a dial manipulation task, comparing three different forms of interaction: direct touch, using the phone as a general-purpose tangible controller on a tabletop computer and manipulating the dial directly on the phone's screen. We also examined user performance for these interactions both with and without tactile feedback from the phone. We found interacting on the phone itself fastest overall, with tactile feedback improving performance. We also show a range of concerns that users have about using their phone as a controller. The results suggest that using a phone and table together can sometimes be better than using the table alone. | |||
| Anamorphicons: an extended display with a cylindrical mirror | | BIBA | Full-Text | 242-243 | |
| Chihiro Suga; Itiro Siio | |||
| We developed an interactive system that applies the technique of anamorphosis using a 2D display and a cylindrical mirror. In this system, a distorted image is shown on a flat panel display or tabletop surface, and the original image appears on the cylindrical mirror when a user places it on the display. By detecting the position and rotation of the cylinder, the system provides interaction between the user and the image on the cylinder. In our current prototype, an iPad and its multi-touch screen are used to detect the cylinder. | |||
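The distorted table image is essentially a polar warp of the source picture around the cylinder's footprint. A rough sketch of such a warp (it ignores the mirror's true reflection geometry, which the real system would model):

```python
import numpy as np

def anamorphic_warp(src, size=800, r0=100, r1=380):
    """Naive polar warp: spread the source image into an annulus
    between radii r0 and r1 around the output center, so that a
    cylindrical mirror placed at the center visually 'rewinds' it.
    This ignores the exact catoptric geometry and is illustrative."""
    h, w = src.shape[:2]
    out = np.zeros((size, size) + src.shape[2:], dtype=src.dtype)
    cy = cx = size // 2
    ys, xs = np.mgrid[0:size, 0:size]
    r = np.hypot(ys - cy, xs - cx)
    theta = np.arctan2(ys - cy, xs - cx)                 # -pi..pi
    inside = (r >= r0) & (r < r1)
    u = ((theta + np.pi) / (2 * np.pi) * (w - 1)).astype(int)
    v = ((r1 - r) / (r1 - r0) * (h - 1)).astype(int)     # image top near mirror
    out[inside] = src[v[inside], u[inside]]
    return out
```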
| uTable: a seamlessly tiled, very large interactive tabletop system | | BIBA | Full-Text | 244-245 | |
| Yongqiang Qin; Chun Yu; Jie Liu; Yuntao Wang; Yue Shi; Zhouyue Su; Yuanchun Shi | |||
| We present uTable, a very large horizontal interactive surface that accommodates up to ten people sitting around it and interacting in parallel. We identify the key aspects of building such large interactive tabletops and discuss the pros and cons of potential techniques. After several rounds of trials, we chose tiled rear projection for building the very large surface and a diffused illumination (DI) solution for detecting touch input. We also present a set of techniques to narrow the interior bezels, make the color and brightness of the surface uniform, and handle multiple streams of input. uTable achieves good overall performance in terms of display quality and input capability. | |||
| Interactive sensemaking in authorship networks | | BIBA | Full-Text | 246-247 | |
| Bram Vandeputte; Erik Duval; Joris Klerkx | |||
| This paper describes the rich opportunities for novel interaction that large multi-touch tables offer to assist researchers. We have designed, developed and evaluated ResearchTable, which provides an interactive visualization of (co-)authorship networks. Our evaluation shows that users discovered relevant researchers and papers that they were previously unaware of. | |||
| Reminiscence Park Interface: personal spaces to listen to songs with memories and diffusions and overlaps of their spaces | | BIBA | Full-Text | 248-249 | |
| Seiko Myojin; Masumi Shimizu; Mie Nakatani; Shuhei Yamada; Hirokazu Kato; Shogo Nishida | |||
| We propose the Reminiscence Park Interface. This interface provides personal spaces for listening to favorite songs using original music boxes, and visualizes the diffusion and overlap of users' spaces with computer graphics on an original resonance table. Users can enjoy listening to their favorite songs alone or with others. | |||
| Analysis of pointing motions by introducing a joint model for supporting embodied large-surface presentation | | BIBA | Full-Text | 250-251 | |
| Yusuke Shigeno; Michiya Yamamoto; Tomio Watanabe | |||
| The importance of exploiting the advantages of embodiment in presentations has often been pointed out. In recent years, with the increasing size of screens and displays, large-surface presentations realized by arranging many screens and displays on the walls of a room are expected. In this study, we precisely analyzed pointing motions in freestyle movement by introducing a joint model. First, we introduced a joint model and proposed a new method for calculating joint centers. Next, we calculated the joint centers of the shoulder, elbow, and wrist. Then, we performed a pointing experiment in front of a large surface and analyzed the pointing motions. | |||
| Spatial connectedness of information presentation for safety training in chemistry experiments | | BIBA | Full-Text | 252-253 | |
| Akifumi Sokan; Hironori Egi; Kaori Fujinami | |||
| This paper focuses on a principle for presenting safety-related messages on the table in a chemistry experiment support system. It is important for students to acquire applied skills so they can conduct experiments without the system in the future, as well as to avoid dangers in front of them. We conducted an eye-gaze analysis to find out the effect of the spatial connectedness of a message to a hazardous object. The results suggest a design principle for the strength of spatial connectedness in terms of visual search range, possible reaction time, and the number of interpretations of a presented message. | |||
| Interaction design of 2D/3D map navigation on wall and tabletop displays | | BIBA | Full-Text | 254-255 | |
| Yusuke Yoshimoto; Thai Hoa Dang; Asako Kimura; Fumihisa Shibata; Hideyuki Tamura | |||
| We propose the interaction design of a map navigation system that displays Google Maps on a tabletop display, and Google Earth and Google Street View on a wall display. For Google Maps on the table, we introduce hand gestures to pan, rotate, and zoom, providing basic interaction, as well as a function that enables comparison of maps of different scales and locations. For Google Earth on the wall, the user can see the 3D world from a bird's-eye view by linking Google Maps and the proposed hang-glider device, creating the sense of flying over Google Maps. Google Street View can be displayed on the wall, and the user can move around in it by using a walking finger gesture on the table. | |||
| Tap2Count: numerical input for interactive tabletops | | BIBA | Full-Text | 256-257 | |
| Tobias Hesselmann; Wilko Heuten; Susanne Boll | |||
| We present a technique to enter numbers on interactive multi-touch tabletops using the ten fingers of both hands. We recognize the number of fingers simultaneously touching the screen and interpret them as digits from 0 to 9, representing any number in the decimal system. Our technique works independently of the user's location and orientation at the tabletop and does not occlude screen space, making it an interesting alternative to commonly used techniques for numerical input on touchscreens, such as virtual keyboards and handwriting recognition systems. | |||
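The core mapping is simple: a simultaneous k-finger tap encodes one decimal digit, and successive taps build up a multi-digit number. A minimal sketch of that logic as we read it from the abstract (the choice of a ten-finger tap for 0 is our assumption):

```python
def tap_to_digit(finger_count):
    """Interpret one simultaneous multi-finger tap as a decimal digit.
    We assume a ten-finger tap encodes 0; the abstract only says
    digits 0-9 are covered by the ten fingers of both hands."""
    if not 1 <= finger_count <= 10:
        raise ValueError("a tap uses 1..10 fingers")
    return finger_count % 10

def taps_to_number(finger_counts):
    """Fold a sequence of taps into a multi-digit decimal number,
    most significant digit first."""
    value = 0
    for count in finger_counts:
        value = value * 10 + tap_to_digit(count)
    return value

print(taps_to_number([2, 10, 5]))  # -> 205
```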
| An interaction on a flat panel display using a planar 1-DOF electrostatic actuator | | BIBA | Full-Text | 258-259 | |
| Kota Amano; Akio Yamamoto | |||
| This paper demonstrates a new user-computer interaction on a flat panel display through the physical motion of a paper sheet. A user handles the paper sheet directly by hand while, at the same time, a computer program drives the sheet using a planar electrostatic actuator. Thus the user and the computer can interact with each other through a physical medium. A simple program developed as a preliminary prototype successfully demonstrated this new type of interaction. | |||
| Optically hiding of tabletop information with polarized complementary image projection: your shadow reveals it! | | BIBA | Full-Text | 260-261 | |
| Mariko Miki; Daisuke Iwai; Kosuke Sato | |||
| We propose the concept and implementation of a graphical information hiding technique for interactive tabletops where users can view the information by simply casting real shadows. We placed three projectors (one in the rear and two in the front) in such a way that the rear one projects graphical information onto a tabletop surface, and the front ones project a complementary image, so that the combined image displayed on the surface becomes uniformly gray, thus hiding the information from the viewer. Users can view the hidden information by blocking the light from the front projector, revealing the complementary image that is being projected onto the occluder. We use the other front projector and polarization filters to make the complementary image projected onto the occluder also uniformly gray. Because the technique completely relies on optical phenomena, users can interact with the system without suffering from any false recognitions or delays. | |||
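The complementary image follows directly from the stated goal: if the rear projector contributes I_rear and the combined surface image should be a uniform gray G, the front projection must supply roughly G - I_rear. A sketch that ignores the radiometric calibration (gamma, per-pixel projector response) a real system would need:

```python
import numpy as np

def complementary_image(rear_img, gray_level=128):
    """Compute the front-projected image so that rear + front sums to
    a uniform gray on the tabletop. Assumes linear, calibrated
    projectors; pixels brighter than gray_level cannot be fully
    cancelled and are clipped to zero."""
    rear = rear_img.astype(np.int16)
    return np.clip(gray_level - rear, 0, 255).astype(np.uint8)
```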
| Extending interactions into hoverspace using reflected light | | BIBA | Full-Text | 262-263 | |
| Dmitry Pyryeskin; Mark Hancock; Jesse Hoey | |||
| Multi-touch tables are becoming increasingly popular, and much research is dedicated to developing suitable interaction paradigms. Multiple techniques exist for extending interactions into the hoverspace -- the space directly above a multi-touch table. We propose a novel hoverspace method that does not require any additional hardware or modification of existing vision-based multi-touch tables. Our prototype was developed on a Diffused Surface Illumination (DSI) vision-based multi-touch setup, and uses light reflected from a person's palm to estimate its position in 3D space above the table. | |||
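Because DSI illumination falls off with distance, the brightness of the reflected palm blob hints at its height. A toy inverse mapping under an assumed, hypothetical calibration (the paper's actual model may differ):

```python
def estimate_hover_height(blob_brightness, b_touch=220.0, b_limit=40.0,
                          h_max=150.0):
    """Crude brightness-to-height mapping for a palm hovering over a
    DSI table: brightness near b_touch means on the surface, fading
    to b_limit at roughly h_max mm above it. A real system would fit
    this curve from calibration data; the linear model and all
    constants here are illustrative assumptions."""
    t = (b_touch - blob_brightness) / (b_touch - b_limit)
    return max(0.0, min(1.0, t)) * h_max
```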
| SourceVis: a tool for multi-touch software visualization | | BIBA | Full-Text | 264-265 | |
| Craig Anslow; Stuart Marshall; James Noble; Robert Biddle | |||
| Most software visualization systems and tools are designed from a single-user perspective and are bound to the desktop and Integrated Development Environments (IDEs). These design decisions do not allow users to easily navigate through software visualizations or to analyse software collaboratively. We have developed SourceVis, a collaborative multi-touch software visualization prototype for multi-touch tables. In this paper we describe the visualizations and interaction capabilities of our prototype. | |||
| Multi-touch wall display system using multiple laser range scanners | | BIBA | Full-Text | 266-267 | |
| Shigeyuki Hirai; Keigo Shima | |||
| In this work we present a multi-touch wall display system that is easy to install and addresses the occlusion problem by using multiple laser range scanners. Our system is implemented on an existing large display embedded in a wall, with two laser range scanners. Each scanner detects touch events, including positions and/or areas. A single range scanner suffers from occlusion when one touch blocks its view of another; we reduce this problem by using multiple scanners. If touch events from different scanners coincide, they are combined into one touch event. Detected multi-touch events are sent to the network as TUIO events. This system is simple and adaptable to various existing displays, including front projection screens, and to various TUIO applications. | |||
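Fusing the two scanners' observations amounts to merging detections that refer to the same physical touch. A minimal sketch of distance-threshold merging (the threshold and coordinate units are illustrative assumptions, not from the paper):

```python
import math

def merge_touches(scanner_a, scanner_b, max_dist=30.0):
    """Combine touch detections (x, y) from two laser range scanners:
    detections closer than max_dist are treated as the same physical
    touch (positions averaged); the rest are kept as-is, so a touch
    occluded from one scanner still survives via the other."""
    merged, used_b = [], set()
    for pa in scanner_a:
        match = None
        for j, pb in enumerate(scanner_b):
            if j not in used_b and math.dist(pa, pb) < max_dist:
                match = j
                break
        if match is None:
            merged.append(pa)
        else:
            pb = scanner_b[match]
            used_b.add(match)
            merged.append(((pa[0] + pb[0]) / 2, (pa[1] + pb[1]) / 2))
    merged.extend(pb for j, pb in enumerate(scanner_b) if j not in used_b)
    return merged
```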
| Novel interaction techniques by combining hand and foot gestures on tabletop environments | | BIBA | Full-Text | 268-269 | |
| Nuttapol Sangsuriyachot; Haipeng Mi; Masanori Sugimoto | |||
| Despite the convenience and intuitiveness of multi-touch gestures, there are some tasks that users cannot conduct effectively even with two-handed gestures. We propose novel input techniques combining hand and foot gestures to enhance user interactions in tabletop environments. We have developed an early prototype of a sensor-based foot platform that recognizes subtle foot gestures, designed foot gestures and interactions that support users' simultaneous tasks, and obtained informal user feedback. | |||
| Rainterior: an interactive water display with illuminating raindrops | | BIBA | Full-Text | 270-271 | |
| Erika Okude; Yasuaki Kakehi | |||
| In our daily lives, we often feel depressed on rainy days. In this research, we aim to relieve these kinds of unpleasant feelings by providing a novel entertainment system using raindrops. More concretely, we propose an interactive display named "rainterior" that can illuminate collisions of raindrops on a water surface by using a projector-camera system. According to the positions of raindrops, the appearance of the water surface changes and sounds are generated in real-time. In this paper, we describe the concept, system design and implementation of rainterior. | |||
| Process Pad: a multimedia multi-touch learning platform | | BIBA | Full-Text | 272-273 | |
| Jain Kim; Colin Meltzer; Shima Salehi; Paulo Blikstein | |||
| This paper introduces Process Pad, an interactive, low-cost multi-touch tabletop platform designed to capture students' thought process and facilitate their explanations. The goal of Process Pad is to elicit students' think-aloud narratives that would otherwise be tacit, in other words, "learn to explain," and "explain to learn." Our focus is on identifying and understanding key design factors in creating opportunities for students to externalize and represent their mental models using multimodal data. From our user observations, we gleaned four design principles as essential criteria based upon which we refined our design: flexibility, tangibility, collaboration and affordability. | |||
| Multimodal feedback for tabletop interactions | | BIBA | Full-Text | 274-275 | |
| Christopher McAdam; Stephen Brewster | |||
| This paper presents a study into the use of different modalities for providing per-contact feedback for interactions on tabletop computers. We replicate the study by Wigdor et al. [3] and confirm their results, and extend the study to examine not just visual feedback but also audio and tactile feedback. We show that these modalities can be as effective as the visual system used in that study, and are preferred by participants. | |||
| Mobile phones as a tactile display for tabletop typing | | BIBA | Full-Text | 276-277 | |
| Christopher McAdam; Stephen Brewster | |||
| This paper presents a study into the use of mobile phones as private tactile displays for interactions with tabletop computers. Text entry performance on tabletop computers is often poor due to the soft keyboards that are normally used. We propose using the vibration motor in the user's mobile phone to provide tactile feedback to improve the experience and performance of typing on tabletop computers. We ran an experiment to compare the effects of two different sets of tactile feedback delivered at two different distal locations on the body (wrist and trouser pocket) using a high-quality actuator. The results showed that both sets of feedback improved text entry rates at the two locations, and that providing more complex feedback produces greater benefits than simplified feedback. We also establish a baseline for standard text entry performance by novice users on tabletop computers using MacKenzie's phrase set. | |||
| Dual mode IR position and state transfer for tangible tabletops | | BIBA | Full-Text | 278-279 | |
| Ali Alavi; Andreas Kunz; Masanori Sugimoto; Morten Fjeld | |||
| This paper presents a method for tracking multiple active tangible devices on tabletops. Most tangible devices for tabletops use infrared to send information about their position, orientation, and state. The method we propose can be realized as a tabletop system using a low-cost camera to detect position and a low-cost infrared (IR) receiver to detect the state of each device. Since two different receivers (camera and IR receiver) are used simultaneously, we call the method dual mode. Using this method, it is possible to use devices with a large variety of states simultaneously on a tabletop, thus allowing more interactive devices on the surface. | |||
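The fusion itself can be pictured as keeping the latest camera position and the latest IR-decoded state per device and joining them. The pairing of camera blobs to device ids is the hard part and is assumed already solved here (e.g., by an initial handshake), so this sketch only illustrates the bookkeeping:

```python
class DualModeTracker:
    """Fuse camera blob positions with device states decoded from a
    single IR receiver. We assume each IR packet carries a device id
    and a state value, and that blob-to-id association has been
    established elsewhere; that association step is omitted."""

    def __init__(self):
        self.states = {}     # device id -> latest decoded state
        self.positions = {}  # device id -> latest (x, y) blob position

    def on_ir_packet(self, device_id, state):
        self.states[device_id] = state

    def on_camera_blob(self, device_id, xy):
        self.positions[device_id] = xy

    def devices(self):
        """Yield (id, position, state) for fully observed devices."""
        for dev, xy in self.positions.items():
            if dev in self.states:
                yield dev, xy, self.states[dev]
```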
| Digital board games: peripheral activity eludes ennui | | BIBA | Full-Text | 280-281 | |
| Aleksander Krzywinski; Weiqin Chen; Erlend Røsjø | |||
| In this paper the authors argue that the affordance of peripheral activities in multi-modal tabletop interfaces for digital board games improves the experience of passive players compared to WIMP interfaces. The authors made a prototype implementation of Carcassonne with a multi-modal, direct-manipulation interface and conducted a user study to explore the effects of this interface, finding that it appears to influence the experience and fun factor of digital board game players. | |||
| TESIS: turn every surface into an interactive surface | | BIB | Full-Text | D1 | |
| Andrea Bellucci; Alessio Malizia; Ignacio Aedo | |||
| TUIC open source SDK: enabling tangible interaction on unmodified capacitive multi-touch displays | | BIB | Full-Text | D2 | |
| Neng-Hao Yu; Sung-Sheng Tsai; Mike Y. Chen; Yi-Ping Hung | |||
| WobblySurface: tactile feedback by holding/releasing a surface panel | | BIB | Full-Text | D3 | |
| Takashi Nagamatsu; Sachio Echizen; Teruhiko Akazawa; Junzo Kamahara | |||
| Biri-biri: pressure-sensitive touch interface with electrical stimulation | | BIB | Full-Text | D4 | |
| Haruna Eto; Yasushi Matoba; Toshiki Sato; Kentaro Fukuchi; Hideki Koike | |||
| FuSA2 touch display | | BIB | Full-Text | D5 | |
| Kosuke Nakajima; Yuichi Itoh; Takayuki Tsukitani; Kazuyuki Fujita; Kazuki Takashima; Yoshifumi Kitamura; Fumio Kishino | |||
| Hovering fingertips detection on diffused surface illumination | | BIB | Full-Text | D6 | |
| Nao Akechi; Tsukasa Mizumata; Ryuuki Sakamoto | |||
| Tocalize: token localization on tablet computer displays | | BIB | Full-Text | D7 | |
| Stefan Krägeloh; Tobias Bliem; Jörg Pickel; Christian Vey; Rinat Zeh | |||
| Interactive surface that can dynamically change the shape and touch sensation | | BIB | Full-Text | D8 | |
| Toshiki Sato; Yasushi Matoba; Nobuhiro Takahashi; Hideki Koike | |||
| Ficon: a tangible display device for tabletop system using optical fiber | | BIB | Full-Text | D9 | |
| Kentaro Fukuchi; Ryusuke Nakabayashi; Toshiki Sato; Yuta Takada | |||
| Core infrastructures and interfaces for context-travel at a tabletop | | BIB | Full-Text | S1 | |
| Andrew Clayphan | |||
| Interaction with stereoscopic data on and above multi-touch surfaces | | BIB | Full-Text | S2 | |
| Florian Daiber | |||
| Mining the collaborative learning process at the tabletop to offer adapted support | | BIB | Full-Text | S3 | |
| Roberto Martínez Maldonado | |||
| Supporting note taking in co-located collaborative visual analytics on large interactive surfaces | | BIB | Full-Text | S4 | |
| Narges Mahyar | |||