
Proceedings of 2014 Conference on Human-Computer Interaction with Mobile Devices and Services

Fullname: MobileHCI'14: 16th International Conference on Human-Computer Interaction with Mobile Devices and Services
Editors: Aaron Quigley; Sara Diamond; Pourang Irani; Sriram Subramanian
Location: Toronto, Canada
Dates: 2014-Sep-23 to 2014-Sep-26
Standard No: ISBN: 978-1-4503-3004-6; ACM DL: Table of Contents; hcibib: MOBILEHCI14
Links: Conference Website
  1. Keynote address
  2. Social networks & input and interaction
  3. Devices and interaction design
  4. Context awareness
  5. 3D
  6. Keynote address
  7. E-learning
  8. Gesture interaction
  9. User-centered design
  10. Input and interaction
  11. Gesture & text-entry
  12. Recommender systems and CSCW
  13. Doctoral consortium
  14. Demonstrations
  15. Poster Presentations
  16. Interactive tutorials
  17. Workshop summaries
  18. Design competition & future innovations
  19. Industrial case studies

Keynote address

Collective mobile interaction in urban spaces BIBAFull-Text 1-2
  Amahl Hazelton
Amahl Hazelton works at the convergence of art, event entertainment, architecture, urban design and digital technology. With a Masters degree in Urban Planning from McGill University, he is interested in new kinds of urban place-making defined as much by digital technology and experience as by physical form. Inspiring urban gatherings, the technology/city interfaces that he directs cross technical boundaries, integrating massive urban projections with smart-handset-based control systems. X-Agora by Moment Factory is a scalable, connected, real-time media management and playback system that can integrate motion sensors, video screens, and projectors with smartphone sensors, multimedia tools and data sources.

Social networks & input and interaction

Identity, identification and identifiability: the language of self-presentation on a location-based mobile dating app BIBAFull-Text 3-12
  Jeremy Birnholtz; Colin Fitzpatrick; Mark Handel; Jed R. Brubaker
Location-aware mobile applications have become extremely common, with a recent wave of mobile dating applications that provide relatively sparse profiles to connect nearby individuals who may not know each other for immediate social or sexual encounters. These applications have become particularly popular among men who have sex with men (MSM) and raise a range of questions about self-presentation, visibility to others, and impression formation, as traditional geographic boundaries and social circles are crossed. In this paper we address two key questions around how people manage potentially stigmatized identities in using these apps and what types of information they use to self-present in the absence of a detailed profile or rich social cues. To do so, we draw on profile data observed in twelve locations on Grindr, a location-aware social application for MSM. Results suggest clear use of language to manage stigma associated with casual sex, and that users draw regularly on location information and other descriptive language to present concisely to others nearby.
Understanding localness of knowledge sharing: a study of Naver KiN 'here' BIBAFull-Text 13-22
  Sangkeun Park; Yongsung Kim; Uichin Lee; Mark Ackerman
In location-based social Q&A, questions related to a local community (e.g., local services and places) are typically answered by local residents (i.e., those who have the local knowledge). In this work, we deepen our understanding of the localness of knowledge sharing by investigating the topical and typological patterns related to geographic characteristics, the geographic locality of user activities, and the motivations for local knowledge sharing. To this end, we analyzed a 12-month Q&A dataset from Naver KiN "Here" and a supplementary survey dataset from 285 mobile users. Our results revealed several unique characteristics of location-based social Q&A. When compared with conventional social Q&A sites, Naver KiN "Here" had very different topical/typological patterns. It exhibited a strong spatial locality: answerers mostly had 1-3 spatial clusters of contributions, topical distributions varied widely across different districts, and a typical cluster spanned a few neighboring districts. In addition, we uncovered unique motivators, e.g., ownership of local knowledge and a sense of local community. The findings reported in the paper have significant implications for the design of Q&A systems, especially location-based social Q&A systems.
Comparing evaluation methods for encumbrance and walking on interaction with touchscreen mobile devices BIBAFull-Text 23-32
  Alexander Ng; John H. Williamson; Stephen A. Brewster
In this paper, two walking evaluation methods were compared to evaluate the effects of encumbrance while controlling the preferred walking speed (PWS). Users frequently carry cumbersome objects (e.g. shopping bags) and use mobile devices at the same time, which can cause interaction difficulties and erroneous input. The two methods used to control the PWS were: walking on a treadmill, and walking around a predefined route on the ground while following a pacesetter. The results from our target acquisition experiment showed that for ground walking at 100% of PWS, accuracy dropped to 36% when carrying a bag in the dominant hand and to 34% when holding a box under the dominant arm. We also discuss the advantages and limitations of each evaluation method when examining encumbrance, and suggest treadmill walking is not the most suitable approach if walking speed is an important factor in future mobile studies.
Field evaluation of a camera-based mobile health system in low-resource settings BIBAFull-Text 33-42
  Nicola Dell; Ian Francis; Haynes Sheppard; Raiva Simbi; Gaetano Borriello
The worldwide adoption of mobile devices presents an opportunity to build mobile systems to support health workers in low-resource settings. This paper presents an in-depth field evaluation of a mobile system that uses a smartphone's built-in camera and computer vision to capture and analyze diagnostic tests for infectious diseases. We describe how health workers integrate the system into their daily clinical workflow and detail important differences in system usage between small clinics and large hospitals that could inform the design of future mobile health systems. We also describe a variety of strategies that health workers developed to overcome poor network connectivity and transmit data to a central database. Finally, we show strong agreement between our system's computed diagnoses and trained health workers' visual diagnoses, which suggests that our system could aid disease diagnosis in a variety of scenarios. Our findings will help to guide ministries of health and other stakeholders working to deploy mobile health systems in similar environments.
Was it worth the hassle?: ten years of mobile HCI research discussions on lab and field evaluations BIBAFull-Text 43-52
  Jesper Kjeldskov; Mikael B. Skov
Evaluation is considered one of the major cornerstones of human-computer interaction (HCI). During the last decade, several studies have discussed the pros and cons of lab and field evaluations. Based on these discussions, we conduct a review of the past decade of mobile HCI research on field and lab evaluation, investigating responses in the literature to the "is it worth the hassle?" paper from 2004. We find that while our knowledge of and experience with both lab and field studies have grown considerably, there is still no definite answer to the lab-versus-field question. In response, we suggest that the real question is not if -- but when and how -- to go into the field, and that research should move beyond usability evaluations to engage with field studies that are truly in-the-wild and longitudinal.

Devices and interaction design

Comparing pointing techniques for grasping hands on tablets BIBAFull-Text 53-62
  Katrin Wolf; Niels Henze
With the recent success of tablet devices a new device type became available for mobile interaction. Just as for mobile phones, touch is the dominant way people interact with tablets. In contrast to the much smaller phones, a firm grip with both hands is needed to securely hold tablet devices. While a large body of work has investigated touch interaction on smaller devices, little empirical research has been carried out on touch-based pointing while holding the device with both hands. To understand touch-based interactions using tablet devices, we conducted an experiment to compare four pointing techniques on both the front and back of the device while it was held in landscape format. We compare direct touch with the following alternatives for selecting targets: indirect pointing on a virtual touchpad, an inverse cursor, and a miniature interaction area. While direct touch is 35% faster than the fastest alternative, only 74% of the touchscreen and 64% of the back of the device can be reached by each hand. We show that among the indirect pointing techniques, the miniaturized interaction area is significantly faster and received the best subjective ratings. We conclude that a miniaturized interaction area is a viable alternative to direct touch, especially on the backside of tablet devices.
BoD taps: an improved back-of-device authentication technique on smartphones BIBAFull-Text 63-66
  Luis A. Leiva; Alejandro Català
Previous work in the literature has shown that back-of-device (BoD) authentication is significantly more secure than standard front-facing approaches. However, the only BoD method available to date (BoD Shapes) is difficult to perform, especially with one hand. In this paper we propose BoD Taps, a novel approach that simplifies BoD authentication while improving its usability. A controlled evaluation with 12 users revealed that BoD Taps and BoD Shapes perform equally well at unlocking the device, but BoD Taps allows users to enter passwords about twice as fast as BoD Shapes. Moreover, BoD Taps is perceived as more usable and less frustrating than BoD Shapes, whether using one hand or two.
Toffee: enabling ad hoc, around-device interaction with acoustic time-of-arrival correlation BIBAFull-Text 67-76
  Robert Xiao; Greg Lew; James Marsanico; Divya Hariharan; Scott Hudson; Chris Harrison
The simple fact that human fingers are large and mobile devices are small has led to the perennial issue of limited surface area for touch-based interactive tasks. In response, we have developed Toffee, a sensing approach that extends touch interaction beyond the small confines of a mobile device and onto ad hoc adjacent surfaces, most notably tabletops. This is achieved using a novel application of acoustic time differences of arrival (TDOA) correlation. Previous time-of-arrival based systems have required semi-permanent instrumentation of the surface and were too large for use in mobile devices. Our approach requires only a hard tabletop and gravity -- the latter acoustically couples mobile devices to surfaces. We conducted an evaluation, which shows that Toffee can accurately resolve the bearings of touch events (mean error of 4.3° with a laptop prototype). This enables radial interactions in an area many times larger than a mobile device; for example, virtual buttons that lie above, below and to the left and right.
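The bearing-from-TDOA idea behind Toffee can be sketched with a far-field approximation: the path difference between two acoustic sensors is the speed of sound times the arrival-time difference, and the bearing follows from an arcsine. This is an illustrative sketch, not the authors' implementation; the sensor spacing and speed-of-sound constant are assumptions.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C (assumed)

def bearing_from_tdoa(delta_t, sensor_spacing):
    """Estimate the bearing (radians from broadside) of a tap, given the
    time difference of arrival (s) between two sensors a fixed distance
    (m) apart. Far-field approximation: path difference = c * delta_t."""
    ratio = SPEED_OF_SOUND * delta_t / sensor_spacing
    ratio = max(-1.0, min(1.0, ratio))  # clamp measurement/numeric noise
    return math.asin(ratio)
```

A tap arriving at both sensors simultaneously lies on the broadside axis (bearing 0); larger time differences swing the estimate toward either end of the sensor baseline.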
BackPat: one-handed off-screen patting gestures BIBAFull-Text 77-80
  Karsten Seipp; Kate Devlin
We present BackPat -- a technique for supporting one-handed smartphone operation by using pats of the index finger, middle finger or thumb on the back or side of the device. We devise a novel method using the device's microphone and gyroscope that enables finger-specific gesture detection and explore efficiency and user acceptance of gesture execution for each finger in three user studies with novice BackPat users.
Around-device devices: my coffee mug is a volume dial BIBAFull-Text 81-90
  Henning Pohl; Michael Rohs
For many people their phones have become their main everyday tool. While phones can fulfill many different roles, they also require users to (1) make do with affordances not specialized for the specific task, and (2) closely engage with the device itself. We propose utilizing the space and objects around the phone to offer better task affordances and to create an opportunity for casual interactions. Such around-device devices are a class of interactors that do not require users to bring special tangibles, but repurpose items already found in the user's surroundings. In a survey study, we determine which places and objects are available to around-device devices. Furthermore, in an elicitation study, we observe what objects users would use for ten interactions.

Context awareness

Contextual experience sampling of mobile application micro-usage BIBAFull-Text 91-100
  Denzil Ferreira; Jorge Goncalves; Vassilis Kostakos; Louise Barkhuus; Anind K. Dey
Research suggests smartphone users face 'application overload', but the literature lacks an in-depth investigation of how users manage their time on smartphones. In a 3-week study we collected smartphone application usage patterns from 21 participants to study how they manage their time interacting with the device. We identified events we term application micro-usage: brief bursts of interaction with applications. While this practice has been reported before, it has not been investigated in terms of the context in which it occurs (e.g., location, time, trigger and social context). In a 2-week follow-up study with 15 participants, we captured participants' context while micro-using, with a mobile experience sampling method (ESM) and weekly interviews. Our results show that approximately 40% of application launches last less than 15 seconds and happen most frequently when the user is at home and alone. We further discuss the context, taxonomy and implications of application micro-usage in our field. We conclude with a brief reflection on the relevance of short-term interaction observations for other domains beyond mobile phones.
Mobile cloud storage: a contextual experience BIBAFull-Text 101-110
  Karel Vandenbroucke; Denzil Ferreira; Jorge Goncalves; Vassilis Kostakos; Katrien De Moor
In an increasingly connected world, users access personal or shared data, stored "in the cloud" (e.g., Dropbox, Skydrive, iCloud) with multiple devices. Despite the popularity of cloud storage services, little work has focused on investigating cloud storage users' Quality of Experience (QoE), in particular on mobile devices. Moreover, it is not clear how users' context might affect QoE. We conducted an online survey with 349 cloud service users to gain insight into their usage and affordances. In a 2-week follow-up study, we monitored mobile cloud service usage on tablets and smartphones, in real-time using a mobile-based Experience Sampling Method (ESM) questionnaire. We collected 156 responses on in-situ context of use for Dropbox on mobile devices. We provide insights for future QoE-aware cloud services by highlighting the most important mobile contextual factors (e.g., connectivity, location, social, device), and how they affect users' experiences while using such services on their mobile devices.
A wearable virtual guide for context-aware cognitive indoor navigation BIBAFull-Text 111-120
  Qianli Xu; Liyuan Li; Joo Hwee Lim; Cheston Yin Chet Tan; Michal Mukawa; Gang Wang
In this paper, we explore a new way to provide context-aware assistance for indoor navigation using a wearable vision system. We investigate how to represent the cognitive knowledge of wayfinding based on first-person-view videos in real-time and how to provide context-aware navigation instructions in a human-like manner. Inspired by the human cognitive process of wayfinding, we propose a novel cognitive model that represents visual concepts as a hierarchical structure. It facilitates efficient and robust localization based on cognitive visual concepts. Next, we design a prototype system that provides intelligent context-aware assistance based on the cognitive indoor navigation knowledge model. We conducted field tests and evaluated the system's efficacy by benchmarking it against traditional 2D maps and human guidance. The results show that context-awareness built on cognitive visual perception enables the system to emulate the efficacy of a human guide, leading to positive user experience.
Learning to recognise disruptive smartphone notifications BIBAFull-Text 121-124
  Jeremiah Smith; Anna Lavygina; Jiefei Ma; Alessandra Russo; Naranker Dulay
Short term studies in controlled environments have shown that user behaviour is consistent enough to predict disruptive smartphone notifications. However, in practice, user behaviour changes over time (concept drift) and individual user preferences need to be considered. There is a lack of research on which methods are best suited for predicting disruptive smartphone notifications longer-term, taking into account varying error costs. In this paper we report on a 16 week field study comparing how well different learners perform at mitigating disruptive incoming phone calls.
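A learner of the kind compared in such a study can be sketched as a simple online perceptron that updates after every labelled call: because the weights are corrected continually, they can track changing user behaviour (concept drift). The feature encoding and learning rate here are hypothetical, not taken from the paper.

```python
def perceptron_step(w, x, label, lr=0.1):
    """One online update: predict whether an incoming call is disruptive
    (+1) or not (-1) from feature vector x, then correct the weights w
    on a mistake. Continual updates let the model drift with behaviour."""
    score = sum(wi * xi for wi, xi in zip(w, x))
    pred = 1 if score > 0 else -1
    if pred != label:
        w = [wi + lr * label * xi for wi, xi in zip(w, x)]
    return w, pred
```

Varying error costs (e.g. a missed important call versus an unwanted interruption) could be handled by scaling `lr` per class, one of the considerations the study raises.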

3D

Exploring smartphone-based interaction with overview+detail interfaces on 3D public displays BIBAFull-Text 125-134
  Louis-Pierre Bergé; Marcos Serrano; Gary Perelman; Emmanuel Dubois
As public displays integrate 3D content, Overview+Detail (O+D) interfaces on mobile devices will allow for a personal 3D exploration of the public display. In this paper we study the properties of mobile-based interaction with O+D interfaces on 3D public displays. We evaluate three types of existing interaction techniques for the 3D translation of the Detail view: touchscreen input, mid-air movement of the mobile device (Mid-Air Phone) and mid-air movement of the hand around the device (Mid-Air Hand). In a first experiment, we compare the performance and user preference of these three types of techniques after prior training. In a second experiment, we study how well the two mid-air techniques perform without training or human help, to imitate typical conditions of use in a public context. Results reveal that Mid-Air Phone and Mid-Air Hand perform best with training. However, without training or human help, Mid-Air Phone is more intuitive and performs better on the first trial. Interestingly, in both experiments users preferred Mid-Air Hand. We conclude with a discussion on the use of mobile devices to interact with public O+D interfaces.
Identifying suitable projection parameters and display configurations for mobile true-3D displays BIBAFull-Text 135-143
  Marcos Serrano; Dale Hildebrandt; Sriram Subramanian; Pourang Irani
We present a two-part exploration on mobile true-3D displays, i.e. displaying volumetric 3D content in mid-air. We first identify and study the parameters of a mobile true-3D projection, in terms of the projection's distance to the phone, angle to the phone, display volume and position within the display. We identify suitable parameters and constraints, which we propose as requirements for developing mobile true-3D systems. We build on the first outcomes to explore methods for coordinating the display configurations of the mobile true-3D setup. We explore the resulting design space through two applications: 3D map navigation and 3D interior design. We discuss the implications of our results for the future design of mobile true-3D displays.
Portallax: bringing 3D displays capabilities to handhelds BIBAFull-Text 145-154
  Diego Martinez Plasencia; Abhijit Karnik; Jonatan Martinez Muñoz; Sriram Subramanian
We present Portallax, a clip-on technology to retrofit mobile devices with 3D display capabilities. Available technologies (e.g. Nintendo 3DS or LG Optimus) and clip-on solutions (e.g. 3DeeSlide and Grilli3D) force users to have a fixed head and device positions. This is contradictory to the nature of a mobile scenario, and limits the usage of interaction techniques such as tilting the device to control a game. Portallax uses an actuated parallax barrier and face tracking to realign the barrier's position to the user's position. This allows us to provide stereo, motion parallax and perspective correction cues in 60 degrees in front of the device. Our optimized design of the barrier minimizes colour distortion, maximizes resolution and produces bigger view-zones, which support 81% of adults' interpupillary distances and allow eye tracking implemented with the front camera. We present a reference implementation, evaluate its key features and provide example applications illustrating the potential of Portallax.

Keynote address

Spaces of innovation in complex UX design BIBAFull-Text 155-156
  Mark Vanderbeeken
Leading practice in mobile user experience design presents complex opportunities and challenges not always fully revealed in academic exploration and research. Addressing users with cultural and behavioural differences; understanding economic imbalances and wide demographic ranges amongst individuals and networks; including ethical considerations such as right to privacy and confidentiality, authorship, and transparency; perception and cognition; larger sustainability concerns; and understanding often ignored biases within the client group -- these many factors define the contemporary user experience design landscape. How can we create a space for innovation that takes these constraints and people's context and aspirations into account? Mark Vanderbeeken will illustrate these challenges with examples from Experientia's practice, particularly through a project they recently conducted with Intel.

E-learning

Interaction for reading comprehension in mobile devices BIBAFull-Text 157-161
  Rafael Veras; Erik Paluka; Meng-Wei Chang; Vivian Tsang; Fraser Shein; Christopher Collins
This paper introduces a touch-based reading interface for tablets designed to support vocabulary acquisition, text comprehension, and reduction of reading anxiety. Touch interaction is leveraged to allow direct replacement of words with synonyms, easy access to word definitions and seamless dialogue with a personalized model of the reader's vocabulary. We discuss how fluid interaction and direct manipulation coupled with natural language processing can help address the reading needs of audiences such as school-age children and English as a Second Language learners.
A long-term field study on the adoption of smartphones by children in panama BIBAFull-Text 163-172
  Elba del Carmen Valderrama Bahamóndez; Bastian Pfleging; Niels Henze; Albrecht Schmidt
Computing technology is increasingly being adopted in emerging countries. Mobile phones and smartphones in particular are becoming widely used -- with a much higher penetration than traditional computers. In our work we investigate how computing technologies, and particularly mobile devices, can support education. While previous work focused on controlled experiments, in this paper we present the results of a 20-week study of mobile phone usage in an emerging region. Our aim was not only to investigate how the phones are used for education but also to learn how they are adopted by children in daily life. By logging screenshots, we used an unsupervised approach that allowed us to unobtrusively observe usage patterns without the presence of researchers. Instead of offering tailored teaching applications, we used general-purpose applications to support teaching and found that the phone itself was an empowering technology, similar to pen and paper. Based on a detailed analysis of actual use in a natural setting, we derived a set of typical use cases for mobile phones in education and describe how they change learning. From in-depth interviews with a teacher, selected guardians and pupils we show that introducing mobile phones has great potential for supporting education in emerging regions.

Gesture interaction

Understanding shortcut gestures on mobile touch devices BIBAFull-Text 173-182
  Benjamin Poppinga; Alireza Sahami Shirazi; Niels Henze; Wilko Heuten; Susanne Boll
Touch gestures are becoming steadily more important with the ongoing success of touch screen devices. Compared to traditional user interfaces, gestures have the potential to lower cognitive load and the need for visual attention. However, today gestures are defined by designers and developers, and it is questionable whether these meet all user requirements. In this paper, we present two exploratory studies that investigate how users would use unistroke touch gestures for shortcut access to a mobile phone's key functionalities. We study the functions that users want to access, the preferred activators for gesture execution, and the shapes of the user-invented gestures. We found that most gestures trigger applications, letter-shaped gestures are preferred, and the gestures should be accessible from the lock screen, the wallpaper, and the notification bar. We conclude with a coherent, unambiguous set of gestures for the 20 most frequently accessed functions, which can inform the design of future gesture-controlled applications.
JuxtaPinch: exploring multi-device interaction in collocated photo sharing BIBAFull-Text 183-192
  Heidi Selmer Nielsen; Marius Pallisgaard Olsen; Mikael B. Skov; Jesper Kjeldskov
Mobile HCI research has started to investigate multi-device interaction as we often have multiple devices at our immediate disposal. We present an application called JuxtaPinch that allows users to share photos while collocated, using several different devices, i.e. mobile phones and tablets, at the same time. JuxtaPinch uses pinching to connect devices; it enables flexible physical positioning of devices and supports partial viewing of photos. Our evaluation showed that JuxtaPinch enabled participants to experience their own familiar photos in new ways through defamiliarization. It further enabled participants to engage jointly in playful interaction with the photos and with each other. However, we also found that multi-device collocated photo sharing challenges aspects of synchronization and coordination.
Are you comfortable doing that?: acceptance studies of around-device gestures in and for public settings BIBAFull-Text 193-202
  David Ahlström; Khalad Hasan; Pourang Irani
Several research groups have demonstrated advantages of extending a mobile device's input vocabulary with in-air gestures. Such gestures show promise but are not yet being integrated into commercial devices. One reason for this might be uncertainty about users' perceptions regarding the social acceptance of such around-device gestures. In three studies, performed in public settings, we explore users' and spectators' attitudes about using around-device gestures in public. The results show that people are concerned about others' reactions. They are also sensitive and selective regarding where and in front of whom they would feel comfortable using around-device gestures. However, acceptance and comfort are strongly linked to gesture characteristics such as gesture size, duration and in-air position. Based on our findings we present recommendations for around-device input designers and suggest new approaches for evaluating the social acceptability of novel input methods.
Monox: extensible gesture notation for mobile devices BIBAFull-Text 203-212
  Roman Ganhör; Wolfgang Spreicer
The rise of modern smartphones brought gesture-based interaction to our daily lives. As the number of different operating systems and graphical user interfaces increases, designers and researchers can benefit from a common notation for mobile interaction design. In this paper, we present a concept of an extensible sketching notation for mobile gestures. The proposed notation, Monox, provides a common basis for collaborative design and analysis of mobile interactions. Monox is platform independent and enables general discussions and negotiations on topics of mobile gestures. An extensive evaluation showed the practicability and ability of Monox to serve as a common denominator for discussion and communication within interdisciplinary groups of researchers, designers and developers.
Automating UI tests for mobile applications with formal gesture descriptions BIBAFull-Text 213-222
  Marc Hesenius; Tobias Griebe; Stefan Gries; Volker Gruhn
Touch- and gesture-based interfaces are common in applications for mobile devices. By evolving into mass-market products, smartphones and tablets created an increased need for specialized software engineering methods. To ensure high-quality applications, constant and efficient testing is crucial in software development. However, testing mobile applications is still cumbersome, time-consuming and error-prone. One reason is the devices' focus on touch-based interaction -- gestures cannot be easily incorporated into automated application tests. We present an extension to the popular Calabash testing framework that solves this problem by allowing gestures to be described with a formal language in test scripts.

User-centered design

100 days of iPhone use: understanding the details of mobile device use BIBAFull-Text 223-232
  Barry Brown; Moira McGregor; Donald McMillan
Internet-connected mobile devices are an increasingly ubiquitous part of our everyday lives, and here we present the results from unobtrusive audio-video recordings of iPhone use -- over 100 days of device use collected from 15 users. The data opens up for analysis the everyday, moment-by-moment use of contemporary mobile phones. Through video analysis of usage we observed how messages, social media and internet use are integrated and threaded into daily life, interaction with others, and everyday events such as transport, delays, establishment choice and entertainment. We document various aspects of end-user mobile device usage, starting with understanding how it is occasioned by context. We then characterise the temporal and sequential nature of use. Lastly, we discuss the social nature of mobile phone usage. Beyond this analysis, we reflect on how to draw these points into ideas for design.
An in-situ study of mobile phone notifications BIBAFull-Text 233-242
  Martin Pielot; Karen Church; Rodrigo de Oliveira
Notifications on mobile phones alert users about new messages, emails, social network updates, and other events. However, little is understood about the nature and effect of such notifications on the daily lives of mobile users. We report from a one-week, in-situ study involving 15 mobile phone users, where we collected real-world notifications through a smartphone logging application alongside subjective perceptions of those notifications through an online diary. We found that our participants had to deal with 63.5 notifications on average per day, mostly from messengers and email. Whether the phone was in silent mode or not, notifications were typically viewed within minutes. Social pressure in personal communication was amongst the main reasons given. While an increasing number of notifications was associated with an increase in negative emotions, receiving more messages and social network updates also made our participants feel more connected with others. Our findings imply that avoiding interruptions from notifications may be viable for professional communication, while in personal communication, approaches should focus on managing expectations.
ProactiveTasks: the short of mobile device use sessions BIBAFull-Text 243-252
  Nikola Banovic; Christina Brant; Jennifer Mankoff; Anind Dey
Mobile devices have become powerful ultra-portable personal computers supporting not only communication but also running a variety of complex, interactive applications. Because of the unique characteristics of mobile interaction, a better understanding of the time duration and context of mobile device uses could help to improve and streamline the user experience. In this paper, we first explore the anatomy of mobile device use and propose a classification of use based on duration and interaction type: glance, review, and engage. We then focus our investigation on short review interactions and identify opportunities for streamlining these mobile device uses through proactively suggesting short tasks to the user that go beyond simple application notifications. We evaluate the concept through a user evaluation of an interactive lock screen prototype, called ProactiveTasks. We use the findings from our study to create and explore the design space for proactively presenting tasks to the users. Our findings underline the need for a more nuanced set of interactions that support short mobile device uses, in particular review sessions.
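The glance/review/engage distinction is defined by session duration and interaction type; a minimal duration-only classifier might look like the following. The thresholds are hypothetical placeholders, not the paper's actual cut-offs.

```python
def classify_session(duration_s, glance_max=10.0, review_max=60.0):
    """Classify a mobile device use session by its duration in seconds.
    Thresholds are illustrative assumptions, not the study's values."""
    if duration_s < glance_max:
        return "glance"   # brief check, e.g. looking at the lock screen
    if duration_s < review_max:
        return "review"   # short task, e.g. reading a notification
    return "engage"       # extended interactive use
```

A lock-screen system like ProactiveTasks would target the middle band, where sessions are long enough for a short suggested task but too short for full application engagement.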
User challenges and successes with mobile payment services in North America BIBAFull-Text 253-262
  Serena Hillman; Carman Neustaedter; Erick Oduor; Carolyn Pang
Mobile payment services have recently emerged in North America where users pay for items using their smartphones. Yet we have little understanding of how people are making use of them and what successes and challenges they have experienced. As a result, we conducted a diary and interview study of user behaviors, motivations, and first impressions of mobile payment services in North America in order to understand how to best design for mobile payment experiences. Participants used a variety of services, including Google Wallet, Amazon Payments, LevelUp, Square and company apps geared towards payments (e.g., Starbucks). Our findings show that users experience challenges related to mental model development, pre-purchase anxiety and trust issues, despite enjoying the gamification, ease-of-use, and support for routine purchases with mobile payments. This suggests designing a better mobile payment experience through the incorporation of users' routines and behaviors, gamification and trust mechanism development.
TalkZones: section-based time support for presentations BIBAFull-Text 263-272
  Bahador Saket; Sijie Yang; Hong Tan; Koji Yatani; Darren Edge
Managing time while presenting is challenging, but mobile devices offer both convenience and flexibility in their ability to support the end-to-end process of setting, refining, and following presentation time targets. From an initial HCI-Q study of 20 presenters, we identified the need to set such targets per 'zone' of consecutive slides (rather than per slide or for the whole talk), as well as the need for feedback that accommodates two distinct attitudes to time management. These findings led to the design of TalkZones, a mobile application for timing support. When giving a 20-slide, 6m40s rehearsed but interrupted talk, 12 participants who used TalkZones registered a mean overrun of only 8s, compared with 1m49s for 12 participants who used a regular timer. We observed a similar 2% overrun in our final study of 8 speakers giving rehearsed 30-minute talks in 20 minutes. Overall, we show that TalkZones can encourage presenters to advance slides before it is too late to recover, even under the adverse timing conditions of short and shortened talks.
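To make the per-zone timing model concrete, the sketch below illustrates the idea of cumulative time targets per zone of consecutive slides; it is an illustrative assumption, not the TalkZones implementation, and the zone boundaries, targets, and function names are made up:

```python
# Hypothetical sketch of per-zone time targets: each zone covers a range of
# consecutive slides and carries a cumulative end-of-zone target (seconds),
# summing to the 6m40s (400 s) talk length from the study.
zones = [
    {"slides": range(1, 6), "target_end": 100},    # intro: slides 1-5
    {"slides": range(6, 16), "target_end": 300},   # core: slides 6-15
    {"slides": range(16, 21), "target_end": 400},  # wrap-up: slides 16-20
]

def zone_status(current_slide, elapsed):
    """Return seconds past (positive) or before (negative) the end-of-zone
    target for the zone containing the current slide."""
    for zone in zones:
        if current_slide in zone["slides"]:
            return elapsed - zone["target_end"]
    raise ValueError("slide outside all zones")

print(zone_status(12, 310))  # 10 s past the core zone's target: time to advance
print(zone_status(3, 80))    # -20: 20 s of slack left in the intro zone
```

Feedback keyed to such a status value is what lets a timer prompt the speaker to advance slides "before it is too late to recover".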

Input and interaction

Direct manipulation video navigation on touch screens BIBAFull-Text 273-282
  Cuong Nguyen; Yuzhen Niu; Feng Liu
Direct Manipulation Video Navigation (DMVN) systems allow a user to directly drag an object of interest along its motion trajectory and have been shown effective for space-centric video browsing tasks. This paper designs touch-based interface techniques to support DMVN on touchscreen devices. While touch screens can suit DMVN systems naturally and enhance the directness during video navigation, the fat finger problems, such as precise selection and occlusion handling, must be properly addressed. In this paper, we discuss the effect of the fat finger problems on DMVN and develop three touch-based object dragging techniques for DMVN on touch screens, namely Offset Drag, Window Drag, and Drag Anywhere. We conduct user studies to evaluate our techniques as well as two baseline solutions on a smartphone and a desktop touch screen. Our studies show that two of our techniques can support DMVN on touch screen devices well and perform better than the baseline solutions.
Enhancing KLM (keystroke-level model) to fit touch screen mobile devices BIBAFull-Text 283-286
  Karim El Batran; Mark D. Dunlop
This short paper introduces an enhancement to the Keystroke-Level Model (KLM), extending it with three new operators that describe interactions on mobile touchscreen devices. Based on Fitts's Law, we modelled a performance estimate equation for each common touch screen interaction. Three prototypes were developed to serve as a test environment in which to validate Fitts's equations and estimate the parameters for these interactions. A total of 3090 observations were made with 51 users. While the studies confirmed that most interactions fitted Fitts's Law well, we noticed that Fitts's Law does not fit interactions with an Index of Difficulty exceeding 4 bits, highlighting a possible maximum comfortable stretch. Based on the results, the following approximate movement times for KLM are suggested: 70ms for a short untargeted swipe, 200ms for a half-screen sized zoom, and 80ms for an icon pointing from a home position. These results could be used by developers of mobile phone and tablet applications to describe tasks as a sequence of the operators used and to predict user interaction times prior to creating prototypes.
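The suggested operator times can be combined KLM-style into a task-time prediction. The sketch below is not from the paper: the operator names, the classic mental-preparation value, and the example task are illustrative assumptions layered on the abstract's three movement times.

```python
# Hypothetical KLM-style task-time predictor using the touch-screen
# operator times suggested in the paper (values in seconds).
OPERATOR_TIMES = {
    "swipe": 0.070,  # short untargeted swipe
    "zoom": 0.200,   # half-screen sized zoom
    "point": 0.080,  # icon pointing from a home position
    "M": 1.350,      # classic KLM mental-preparation operator (Card et al.)
}

def predict_task_time(operators):
    """Sum operator times for a task described as a sequence of operators."""
    return sum(OPERATOR_TIMES[op] for op in operators)

# Example: mentally prepare, point at an icon, zoom, then swipe.
task = ["M", "point", "zoom", "swipe"]
print(f"{predict_task_time(task):.3f} s")  # 1.700 s
```

This is exactly the "describe tasks as a sequence of operators" use the abstract envisions for pre-prototype time estimates.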
Around-body interaction: sensing & interaction techniques for proprioception-enhanced input with mobile devices BIBAFull-Text 287-290
  Xiang 'Anthony' Chen; Julia Schwarz; Chris Harrison; Jennifer Mankoff; Scott Hudson
The space around the body provides a large interaction volume that can allow for big interactions on small mobile devices. However, interaction techniques making use of this opportunity are underexplored, primarily focusing on distributing information in the space around the body. We demonstrate three types of around-body interaction -- canvas, modal and context-aware interactions -- in six demonstration applications. We also present a sensing solution using standard smartphone hardware: a phone's front camera, accelerometer and inertial measurement unit. Our solution allows a person to interact with a mobile device by holding and positioning it between a normal field of view and its vicinity around the body. By leveraging a user's proprioceptive sense, around-body interaction opens a new input channel that enhances conventional interaction on a mobile device without requiring additional hardware.
Investigating the effectiveness of peephole interaction for smartwatches in a map navigation task BIBAFull-Text 291-294
  Frederic Kerber; Antonio Krüger; Markus Löchtefeld
With the increasing availability of smartwatches, the question of suitable input modalities arises. While direct touch input comes at the cost of the fat-finger problem, we propose using a dynamic peephole to explore larger content such as websites or maps. In this paper, we present the results of a user study comparing the performance of static and dynamic peephole interactions for a map navigation task on a smartwatch display. As a first method, we investigated the static peephole metaphor, where the displayed map is moved on the device via direct touch interaction. In contrast, for the second method -- the dynamic peephole -- the device is moved and the map is static with respect to an external frame of reference. We compared both methods in terms of task performance and perceived user experience. The results show that the dynamic peephole interaction performs significantly more slowly in terms of task completion time.
ambiPad: enriching mobile digital media with ambient feedback BIBAFull-Text 295-298
  Markus Löchtefeld; Nadine Lautemann; Sven Gehring; Antonio Krüger
With the recent increase in sales figures of tablet computers, a corresponding boost of mobile media consumption on such devices can be observed. Even though tablets provide a comparably larger screen than smartphones -- which take the main share of mobile media consumption -- the experience of consuming media content on tablets is still limited. For movie theatres, the notion of 4-dimensional (4D) film exists, describing a 3D movie that is enhanced with ambient feedback such as environmental light, air streams, or moisture. In this paper we present ambiPad, a tablet prototype that enriches mobile digital media with ambient feedback around the tablet's display. Besides a light-emitting frame, ambiPad allows for thermal stimuli as well. We report on the results of a qualitative user study, in which the feedback channels of ambiPad were rated as highly attractive and desirable by the participants.

Gesture & text-entry

Cuenesics: using mid-air gestures to select items on interactive public displays BIBAFull-Text 299-308
  Robert Walter; Gilles Bailly; Nina Valkanova; Jörg Müller
Most of today's public displays only show predefined contents that passers-by are not able to change. We argue that interactive public displays would benefit from immediately usable mid-air techniques for choosing options, expressing opinions or more generally selecting one among several items. We propose a design space for hand-gesture based mid-air selection techniques on interactive public displays, along with four specific techniques that we evaluated at three different locations in the field. Our findings include: 1) if no hint is provided, people successfully use Point+Dwell for selecting items, 2) the user representation could be switched from Mirror to Cursor after registration without causing confusion, 3) people tend to explore items before confirming one, 4) in a public context, people frequently interact inadvertently (without looking at the screen). We conclude by providing recommendations for designers of interactive public displays to support immediate usability for mid-air selection.
AirAuth: evaluating in-air hand gestures for authentication BIBAFull-Text 309-318
  Md Tanvir Islam Aumi; Sven Kratz
Secure authentication with devices or services that store sensitive and personal information is highly important. However, traditional password- and PIN-based authentication methods force a compromise between security and user experience. AirAuth is a biometric authentication technique that uses in-air gesture input to authenticate users. We evaluated our technique on a predefined (simple) gesture set, and our classifier achieved an average accuracy of 96.6% in an equal error rate (EER)-based study. We obtained an accuracy of 100% when exclusively using personal (complex) user gestures. In a further user study, we found that AirAuth is highly resilient to video-based shoulder surfing attacks, with a measured false acceptance rate of just 2.2%. Furthermore, a longitudinal study demonstrates AirAuth's repeatability and accuracy over time. AirAuth is relatively simple and robust, requires little computational power, and is hence deployable on embedded or mobile hardware. Unlike traditional authentication methods, our system's security is positively aligned with user-rated pleasure and excitement levels. In addition, AirAuth attained acceptability ratings in personal, office, and public spaces that are comparable to an existing stroke-based on-screen authentication technique. Based on the results presented in this paper, we believe that AirAuth shows great promise as a novel, secure, ubiquitous, and highly usable authentication method.
MirrorTouch: combining touch and mid-air gestures for public displays BIBAFull-Text 319-328
  Jörg Müller; Gilles Bailly; Thor Bossuyt; Niklas Hillgren
In this paper we present a series of three field studies on the integration of multiple modalities (touch and mid-air gestures) in a public display. We analyze our field studies using Conversion Diagrams, an approach to model and evaluate usage of multimodal public displays. Conversion Diagrams highlight the transitions inherent in a multimodal system and provide a systematic approach to investigate which factors affect them and how. We present a semi-automatic annotation technique to obtain Conversion Diagrams. We use Conversion Diagrams to evaluate interaction in the three field studies. We found that 1) clear affordances for touch were necessary when mid-air gestures were present: a call-to-action caused significantly more users to touch than a button (+200%), 2) the order of modality usage was different from what we designed for, and the location impacted which modality was used first, and 3) small variations in the application led to a considerable increase in users (+290%).
Probabilistic touchscreen keyboard incorporating gaze point information BIBAFull-Text 329-333
  Toshiyuki Hagiya; Tsuneo Kato
We propose a novel probabilistic keyboard that takes into account the distance between a gaze point and a touch position in order to improve typing efficiency. The proposed keyboard dynamically changes the size of the search space for predicting candidate words based on a model that estimates the magnitude of touch position errors according to the distance between the gaze point and the touch position. This makes it possible for users to type intended words even when they glance at different areas on the screen. Performance was evaluated in terms of input accuracy in total error rate (TER) and of typing speed in words per minute (WPM). The results showed that the proposed keyboard successfully reduced the TER by 18.2% and increased WPM by 12.7% compared to the conventional keyboard.
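The core idea -- widening the candidate search space as the touch point drifts away from the gaze point -- can be sketched as follows. This is a toy model, not the authors' implementation; the linear error model, its parameters, and the 2-sigma search radius are hypothetical.

```python
import math

def touch_error_sigma(gaze, touch, base_sigma=5.0, slope=0.05):
    """Estimate touch-error magnitude (px) as a linear function of the
    gaze-touch distance, per the paper's idea; parameters are made up."""
    d = math.dist(gaze, touch)
    return base_sigma + slope * d

def candidate_keys(touch, keys, gaze):
    """Return keys whose centers fall inside a search radius that grows
    with the estimated touch error (a 2-sigma radius here)."""
    radius = 2.0 * touch_error_sigma(gaze, touch)
    return [k for k, c in keys.items() if math.dist(touch, c) <= radius]

keys = {"f": (100, 200), "g": (140, 200), "h": (180, 200)}
print(candidate_keys((105, 200), keys, gaze=(105, 200)))  # gaze on touch: tight search
print(candidate_keys((105, 200), keys, gaze=(505, 200)))  # gaze far away: wider search
```

With the gaze on the touch point only the nearest key is considered; with the gaze far away, neighbouring keys also enter the word-prediction search space, which is how a user can type intended words while glancing elsewhere on the screen.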
The inviscid text entry rate and its application as a grand goal for mobile text entry BIBAFull-Text 335-338
  Per Ola Kristensson; Keith Vertanen
We introduce the concept of the inviscid text entry rate: the point when the user's creativity is the bottleneck rather than the text entry method. We then apply the inviscid text entry rate to define a grand goal for mobile text entry. Via a proxy measure we estimate the population mean of the sufficiently inviscid entry rate to be 67 wpm. We then compare existing mobile text entry methods against this estimate and find that the vast majority of text entry methods in the literature are substantially slower. This analysis suggests the mobile text entry field needs to focus on methods that can viably approach the inviscid entry rate.
Texting while walking: an evaluation of mini-qwerty text input while on-the-go BIBAFull-Text 339-348
  James Clawson; Thad Starner; Daniel Kohlsdorf; David P. Quigley; Scott Gilliland
Interacting with mobile technology while in-motion has become a daily activity for many of us. Common sense leads one to believe that texting with a mini-qwerty keyboard while mobile can be dangerous since users are distracted and not paying attention to the environment. Previous studies have found that mobility negatively impacts text entry performance for novice participants typing on virtual keyboards on touch screen mobile phones. We investigate the impact of mobility on expert users' ability to quickly and accurately input text on mobile phones equipped with fixed-key mini-qwerty keyboards. In total, 36 participants completed 600 minutes of typing on mini-qwerty keyboards (300 minutes training up to expertise) in three mobility conditions (seated, standing, and walking) generating almost 4,000,000 characters across all conditions. Surprisingly, we found that walking has a significant impact on expert typing speeds but does not significantly impact expert accuracy rates.

Recommender systems and CSCW

ReviewCollage: a mobile interface for direct comparison using online reviews BIBAFull-Text 349-358
  Haojian Jin; Tetsuya Sakai; Koji Yatani
Review comments posted on online websites can help the user decide on a product to purchase or a place to visit. They can also be useful for closely comparing a couple of candidate entities. However, the user may have to read different webpages back and forth for comparison, which is undesirable particularly when she is using a mobile device. We present ReviewCollage, a mobile interface that aggregates information about two reviewed entities in a one-page view. ReviewCollage uses attribute-value pairs, known to be effective for review text summarization, and highlights the similarities and differences between the entities. Our user study confirms that ReviewCollage can help the user compare two entities and make a decision within a couple of minutes, at least as quickly as existing summarization interfaces. It also reveals that ReviewCollage could be most useful when two entities are very similar.
Mobile interaction analysis: towards a novel concept for interaction sequence mining BIBAFull-Text 359-368
  Florian Lettner; Christian Grossauer; Clemens Holzmann
Identifying the intentions of users when they launch an application on their smartphone, and understanding which tasks they actually execute, is a key problem in mobile usability analysis. First, knowing which tasks users actually execute is required for calculating common usability metrics such as task efficiency, error rates and effectiveness. Second, understanding how users perform these tasks is important for developers in order to validate designed interaction sequences for tasks (e.g. the sequential steps required to successfully perform and complete a task). In this paper, we describe a novel approach for automatically extracting and grouping interaction sequences from users, assigning them to predefined tasks (e.g. writing an email) and visualising them in an intuitive way. Thus, we can determine whether the way users actually execute tasks in the field matches the designer's intended interaction sequences, and where it differs. This allows us to discover whether users find alternate ways of performing certain tasks, which contributes to the application design process. Moreover, if the users' perception of tasks differs from the designer's intention, we lay the foundation for recognising issues users may have while executing them.
FlexiGroups: binding mobile devices for collaborative interactions in medium-sized groups with device touch BIBAFull-Text 369-378
  Tero Jokela; Andrés Lucero
We present a touch-based method for binding mobile devices for collaborative interactions in a group of collocated users. The method is highly flexible, enabling a broad range of different group formation strategies. We report an evaluation of the method in medium-sized groups of six users. When forming a group, the participants primarily followed viral patterns where they opportunistically added other participants to the group without advance planning. The participants also suggested a number of more systematic patterns, which required the group to agree on a common strategy but then provided a clear procedure to follow. The flexibility of the method allowed the participants to adapt it to the changing needs of the situation and to recover from errors and technical problems. Overall, device binding in medium-sized groups was found to be a highly collaborative group activity and the binding methods should pay special attention to supporting groupwork and social interactions.
Ubi-jector: an information-sharing workspace in casual places using mobile devices BIBAFull-Text 379-388
  Hajin Lim; Hyunjin Ahn; Junwoo Kang; Bongwon Suh; Joonhwan Lee
The widespread use of mobile devices has transformed casual places into meeting places. However, these places rarely offer a shared information workspace such as a projector, making it inconvenient and inefficient to exchange information and get direct feedback. To address this challenge, we present Ubi-jector, a mobile system that provides a shared information space distributed equally across each participant's mobile device, allowing group members to share documents and collaborate in real time. We first characterized information sharing patterns and identified the limitations of current practice in meeting places without a shared workspace by conducting qualitative user studies. Next, we implemented Ubi-jector following the design guidelines drawn from the prior stage. Finally, we performed an evaluation study that showed the potential of Ubi-jector to facilitate effective information sharing and foster active participation even in poorly equipped environments.
CoSMiC: designing a mobile crowd-sourced collaborative application to find a missing child in situ BIBAFull-Text 389-398
  Hyojeong Shin; Taiwoo Park; Seungwoo Kang; Bupjae Lee; Junehwa Song; Yohan Chon; Hojung Cha
Finding a missing child is an important problem concerning not only parents but also our society. It is essential and natural to use serendipitous clues from neighbors for finding a missing child. In this paper, we explore a new architecture of crowd collaboration to expedite this mission-critical process and propose a crowd-sourced cooperative mobile application, CoSMiC. It helps parents find their missing child quickly on the spot before he or she completely disappears. A key idea lies in constructing the location history of a child via crowd participation, thereby leading parents to their child easily and quickly. We implement a prototype application and conduct extensive user studies to assess the design of the application and investigate its potential for practical use.

Doctoral consortium

Effective multimodal feedback for around-device interfaces BIBAFull-Text 399-400
  Euan Freeman
Around-device interaction lets people use their phones without having to pick them up or reach out and touch them. This allows interaction when touch may not be available; for example, users could gesture to interact with their phones while cooking, avoiding touching the screen with messy hands. Well-designed feedback helps users overcome uncertainty during gesture interaction; however, giving effective feedback from small devices can be a challenge, and detailed visual feedback will not always be suitable. My thesis research looks at how other types of feedback may be used effectively during around-device interaction to help users.
Enhanced situational awareness and communication for emergency response BIBAFull-Text 401-402
  Minoo Erfani Joorabchi
It is important to recognize emergency responders' needs during emergency response to workplace incidents in order to propose solutions that facilitate responders' tasks and resolve incidents in a timely manner. To understand the emergency response procedures involved in workplace incidents, we interviewed McGill Safety personnel. We identify potential areas of improvement in their procedures, which help us to shape our research objectives. We then propose methods to address these objectives. We focus our investigation on a spill scenario and briefly describe our mobile solution to assist in this use context.
Email management and work-home boundaries BIBAFull-Text 403-404
  Marta E. Cecchinato
In my PhD research I am exploring the effect of email on work-home boundaries. The ultimate goal is to design a tool that helps people manage their email better and reduces the stress associated with this activity. I argue that this will require understanding individual differences in email behaviours and how email can impact work-home boundaries.
The effects of encumbrance on mobile interactions BIBAFull-Text 405-406
  Alexander Ng
The popularity of touchscreen mobile devices gives users a variety of useful apps and functionality on the move. As a result, mobile devices are being used in a range of different contexts. One common scenario that has received little attention from researchers is the effect of encumbrance: carrying objects (for example, shopping bags and personal gear) while interacting with mobile devices at the same time. This is a frequent everyday situation and one that can cause interaction difficulties [3]. There is a lack of knowledge of the impact encumbrance has on interaction; therefore, the usability issues in these physically and mentally demanding contexts are unknown. Prior to the start of our research, only one related study had examined input performance with handheld devices while multitasking with different types of objects [7]. A better understanding of the interaction problems caused by encumbrance would allow researchers to develop more effective input techniques for touchscreen mobile devices.
Co-design of cognitive telerehabilitation technologies BIBAFull-Text 407-408
  Tuck-Voon How
Emerging mobile technologies enable new forms of interaction in the context of our everyday life -- this has profound implications for delivering brain injury rehabilitation at a distance through technology, or cognitive telerehabilitation. However, a challenge lies in designing these technologies, as they require both clinical and technical expertise. This research aims to address this challenge through a co-design approach that brings together mobile technology experts and rehabilitation practitioners.
Adaption of usability evaluation methods for native smartphone applications BIBAFull-Text 409-410
  Ger Joyce
Research has shown that traditional usability evaluation methods cannot be readily applied to the evaluation of native smartphone applications. This research investigates this issue by adapting two usability evaluation methods, applying each at different stages of the design life cycle. Both adapted methods, when combined as a framework, may help in the design of more usable native smartphone applications.
Body-worn sensors for remote implicit communication BIBAFull-Text 411-412
  Jeffrey R. Blum
Although there has been a great deal of work on using machine learning algorithms to categorize user activity (e.g., walking, biking) from accelerometer and other sensor data, less attention has been focused on relaying such information in real-time for remote implicit communication. Humans are very familiar with using both explicit and implicit communication with others physically near to us, for example, by using body language and modulating our voice tone and volume. Likewise, remote explicit communication has also existed for a long time in the form of phone calls, text messages, and other mechanisms for communicating explicitly over great distances. However, remote implicit communication between mobile users has been less explored, likely since there have been limited avenues for collecting such information and rendering it in a meaningful way while we go about our daily lives.

Demonstrations

Urgent mobile tool for hearing impaired, language dysfunction and foreigners at emergency situation BIBAFull-Text 413-416
  Naotsune Hosono; Hiromitsu Inoue; Miwa Nakanishi; Yutaka Tomita
This paper introduces a mobile application that allows deaf, language-dysfunctional, or non-native-language users to report emergencies. An earlier version (a booklet) was designed so that hearing impaired persons could communicate with others without speaking. The current smartphone application allows calls to be made from a remote location. The application's screen transitions follow the dialogue models used by emergency services. Users interact with the dialogues by tapping on icons or pictograms instead of using text messages. Evaluation by deaf people and a non-native speaker found that reporting an emergency with this tool was about three times quicker than using text messages.
JuxtaPinch: an application for collocated multi-device photo sharing BIBAFull-Text 417-420
  Heidi Selmer Nielsen; Marius Pallisgaard Olsen; Mikael B. Skov; Jesper Kjeldskov
We have developed an application called JuxtaPinch that allows users to share photos on multiple devices, i.e. mobile phones and tablets, while being collocated. JuxtaPinch employs simple and intuitive interaction techniques, e.g. pinching to connect devices, and it enables flexible physical positioning of devices and supports partial photo viewing. JuxtaPinch further enables users to use their own devices and access photos stored on own devices. In the Interactivity session, audience members can explore and view photos with friends and colleagues using different devices and experience defamiliarization and playful interaction with the photos -- aspects that we have uncovered during lab and field studies of JuxtaPinch.
The realization of new virtual forest experience environment through PDA BIBAFull-Text 421-424
  Kana Muramatsu; Hiroki Kobayashi; Junya Okuno; Akio Fujiwara; Kazuhiko Nakamura; Kaoru Saito
Currently, human-computer interaction (HCI) is primarily focused on human-centric interactions; however, people experience many nonhuman-centric interactions during the course of a day. Interactions with nature, such as the experience of walking through a forest or an unexpected encounter with wildlife, can imprint the beauty of nature in our memories. In this context, the present paper considers an experimental nonhuman-centric interface for PDA application designs through an imaginable interaction with nature. A virtual forest experience environment on a PDA is made more realistic through two subsystems: "Panorama-viewer of forest" and "Remote animal sensing". The former is an application with which users can look out over the forest landscape in all directions using the gyro-sensor inside the PDA. The latter is a virtual system allowing users living in remote urban areas to experience interaction with wild deer in a forest virtually in real time. This novel design means that users can realize a forest experience through the PDA in their hands.
Back-of-device authentication with bod taps and bod shapes BIBAFull-Text 425
  Alejandro Catala; Luis A. Leiva
This demonstration accompanies a paper accepted at MobileHCI'14. Back-of-device (BoD) authentication has been shown to be significantly more secure than standard front-facing approaches, with BoD Shapes being the most representative method in the literature. With the aim of better understanding and improving its usage, we developed BoD Taps as a novel alternative. Our experiments revealed that BoD Taps and BoD Shapes perform equally well at unlocking the device, but BoD Taps allows users to enter passwords about twice as fast. Moreover, BoD Taps was perceived as more usable and less frustrating than BoD Shapes. This demonstration showcases both authentication methods in action, aiming to compare and discuss their features and potential improvements.
SqueezeDiary: using squeeze gesture as triggers of diary events BIBAFull-Text 427-429
  Ming Ki Chong; Umar Rashid; Jon Whittle; Chee Siang Ang
The diary method has long been adopted for recording participants' behaviours. However, recording diary entries can be difficult or deemed inappropriate in certain situations, such as in a social group or in a meeting. In this demo we present SqueezeDiary, a tool that uses squeeze gestures to let users mark triggers of diary events, which they reflect on later when they are not busy (e.g., during lunch). Our application enables delayed reflection, where users can reflect on their recorded event instances retrospectively during their downtime.
Off the couch and out of the hospital, mobile applications for acceptance and commitment therapy BIBAFull-Text 431-434
  Marjan Verstappen; Paula Gardner; Dora Poon; Tim Bettridge
This paper describes the research and concept development process involved in designing a mobile app for depressed youth learning to practice a mindfulness-based psychotherapy program entitled Acceptance and Commitment Therapy (ACT) at Trillium Healthcare Centre in Toronto. Our process involved identifying aspects of pre-existing mobile applications for mindfulness that may be discouraging for youth with depression and devising strategies to overcome these negative messages within the ACT application. We propose to present a working prototype of the mobile application at the MobileHCI conference.
Wearable haptic gaming using vibrotactile arrays BIBAFull-Text 435-438
  Adam Tindale; Michael Cumming; Hudson Pridham; Jessica Peter; Sara Diamond
In this paper we explore the design, layout and configuration of wrist-wearable, haptic gaming interfaces, which involve visual and vibrotactile spatial and temporal patterns. Our goal is to determine overall layouts and spatial and temporal resolutions on the wrist suitable for interactive tactile stimuli. Our approach is to first explore the simplest of configurative patterns that are intended to encircle the wrist, and then study their affordances. We describe various informal user studies we have employed to explore and test issues that arose.
Mobile experience lab: body editing BIBAFull-Text 439-442
  Stephen Surlin; Paula Gardner
Body Editing is an interactive installation that combines depth sensing (Kinect 3D camera), biometric sensors, musical performance and abstract drawing software to create a mobile wireless interface that sonically and graphically represents the user's motion in space. The wireless nature of this gesture-controlled interface is an explicit attempt to create embodied experiences that encourage users to be more aware of their body through movement and audio/visual feedback and less focused on technological augmentation.
Non-verbal communications in mobile text chat: emotion-enhanced mobile chat BIBAFull-Text 443-446
  Jackson Feijó Filho; Thiago Valle; Wilson Prata
A great part of human communication is carried out non-verbally. All of this information is lost in mobile text messaging. This work describes an attempt to augment text chatting on mobile phones by adding automatically detected facial expression reactions to conversations. These expressions are detected using known image processing techniques. Related work investigating non-verbal communication through text messaging is considered and distinguished from the present solution. The conception and implementation of a mobile phone application with this feature is described, and user studies are narrated. Finally, the context of application, conclusions and future work are also discussed.

Poster Presentations

Changed shape of key: an approach to enhance the performance of the soft keyboard BIBAFull-Text 447-452
  Hsi-Jen Chen
Text entry using a soft keyboard on small mobile devices is difficult, one reason being that there is often an offset when typing. This paper presents a soft keyboard whose key shapes have been changed in order to avoid the offset problem. An app and a usability test demonstrate that this soft keyboard with changed key shapes can increase words per minute and reduce the error rate. Another finding is that a large space between keys is preferred.
Ergonomic characteristics of gestures for front- and back-of-tablets interaction with grasping hands BIBAFull-Text 453-458
  Katrin Wolf; Robert Schleicher; Michael Rohs
The thumb and the fingers differ in flexibility, and thus gestures performed on the back of a held tablet are expected to differ from those performed on the touchscreen with the thumb of the grasping hands. APIs for back-of-device gesture detection should take this difference into account. In a user study, we recorded vectors for the four most common touch gestures. We found that drag, swipe, and press gestures differ significantly when executed on the back versus the front side of a held tablet. Corresponding values are provided that may be used to define gesture detection thresholds for back-of-tablet interaction.
Towards usable and acceptable above-device interactions BIBAFull-Text 459-464
  Euan Freeman; Stephen Brewster; Vuokko Lantz
Gestures above a mobile phone would let users interact with their devices quickly and easily from a distance. While both researchers and smartphone manufacturers develop new gesture sensing technologies, little is known about how best to design these gestures and interaction techniques. Our research looks at creating usable and socially acceptable above-device interaction techniques. We present an initial gesture collection, a preliminary evaluation of these gestures and some design recommendations. Our findings identify interesting areas for future research and will help designers create better gesture interfaces.
A comparison of location search UI patterns on mobile devices BIBAFull-Text 465-470
  Sebastian Meier; Frank Heidmann; Andreas Thom
This poster examines how users utilize mobile applications that offer an interface for finding locations, and how their way of interacting changes depending on their intent. Through the analysis of existing interfaces we identified 5 location search patterns. In a further evaluation of these patterns we tried to identify which patterns serve which user demands for information. In a goal-directed pilot study we gained a first insight into the correlations between specific user requirements and location search patterns.
GUIDES: a graphical user identifier scheme using sketching for mobile web-services BIBAFull-Text 471-476
  Yusuke Matsuno; Hung-Hsuan Huang; Yu Fang; Kyoji Kawagoe
In this paper, a novel graphical user identifier scheme for web-based mobile services is proposed. Despite rapid progress in biological authentication technologies for user identification, the traditional text-based UserID-and-password scheme is still widely used even for mobile web-services. In mobile usage, a virtual keyboard is difficult to use for entering text, which makes text input time-consuming. To decrease this difficulty, GUIDES (Graphical User IDEntifier using Sketching) is proposed in this paper. With GUIDES, a user can input his/her user identifier, called a GUID (Graphical User ID), by drawing a sketch on a mobile touchscreen device. From our experiments, we conclude that GUIDES enables a user to input his/her User-ID more efficiently and remember it better, compared with the text-based user-id scheme.
MuGIS multi-user geographical information system BIBAFull-Text 477-482
  Sebastian Schöffel; Johannes Schwank; Achim Ebert
Collaboration between users of a system is often a crucial factor in reaching given goals effectively and efficiently. However, in many application domains, current systems do not sufficiently support collaborative work (sometimes they do not support it at all). One good example is geographical information systems (GIS), which usually follow a one-user-at-a-time approach. In this paper, we present the development of a scalable Multi-user Geographical Information System (MuGIS). With MuGIS it is now possible to integrate large display environments with mobile smart devices for remote control. The system is deployed as a client-server architecture. It uses the NASA World Wind Java framework and SOAP web services for communication. On the client side, all common mobile smart devices are supported. The underlying concept provides different user roles and multi-user identification.
AudioTorch: using a smartphone as directional microphone in virtual audio spaces BIBAFull-Text 483-488
  Florian Heller; Jan Borchers
Mobile audio augmented reality systems can be used in a range of applications, e.g., as a navigational aid for the visually impaired or as an audio guide in museums. The implementation of such systems usually relies on head orientation data, requiring additional hardware in the form of a digital compass in the headphones. As an alternative, we propose AudioTorch, a system that turns a smartphone into a virtual directional microphone. This metaphor, where users move the device to detect virtual sound sources, allows quick orientation and easy discrimination between proximate sources, even with simplified rendering algorithms. We compare the navigation performance of head orientation measurement to AudioTorch. A lab study with 18 users showed the rate of correctly recognized sources to be significantly higher with AudioTorch than with head-tracking, while task completion times did not differ significantly. Presence in the virtual environment received similar ratings for both conditions.
Wearable remote control of a mobile device: comparing one- and two-handed interaction BIBAFull-Text 489-494
  Jessica Speir; Rufino R. Ansara; Colin Killby; Emily Walpole; Audrey Girouard
While wearable technologies are suitable for remotely controlling mobile devices, few studies have examined user preferences for one- or two-handed touch interaction with these wearables, especially when worn on the wrist and hand area. As these locations are recognized as socially acceptable and preferred by users, we ran a study of touch interaction to remotely control mobile devices. Our results suggest users prefer swipe gestures over touch gestures when interacting with wearables on the wrist or hand, and that users find both one- and two-handed interactions suitable for wearable remote controls.
Interactive opinion polls on public displays: studying privacy requirements in the wild BIBAFull-Text 495-500
  Matthias Baldauf; Stefan Suette; Peter Fröhlich; Ulrich Lehner
Interactive opinion polls are a promising novel use case for public urban displays. However, voicing one's opinion at such a public installation poses special privacy requirements. In this paper, we introduce our ongoing work on investigating the roles of the interaction technique and the poll question in this novel context. We present a field study comparing three different voting techniques (public touch interface, personal smartphone by scanning a QR code, from remote through a short Web address) and three types of poll questions (general, personal, local). Overall, the results show that actively casting an opinion on a timely topic is highly appreciated by passers-by. The public voting opportunity through a touch screen is clearly preferred. Offering mobile or remote voting does not significantly increase the overall participation rate. The type of poll question has an impact on the number of participants but does not influence the preferred interaction modality.
Toward a non-intrusive, physio-behavioral biometric for smartphones BIBAFull-Text 501-506
  Esther Vasiete; Yan Chen; Ian Char; Tom Yeh; Vishal Patel; Larry Davis; Rama Chellappa
Biometric authentication relies on an individual's inner characteristics and traits. We propose an active authentication system on a mobile device that relies on two biometric modalities: 3D gestures and face recognition. The novelty of our approach is to combine 3D gesture and face recognition in a nonintrusive and unconstrained environment; the active authentication system runs in the background while the user performs his/her main task.
A design framework of branded mobile applications BIBAFull-Text 507-512
  Zhenzhen Zhao; Christine Balagué
Mobile phone applications have received intense attention from marketers due to the high engagement of users and their positive persuasive impact on brands. However, how can companies get on the right track in designing branded apps? Little research has been done on identifying the elements that can be used to design a branded apps strategy. Our research aims to offer a design framework for branded apps by identifying constructs from the company, user and technology perspectives. By evaluating 84 mobile apps from the top 11 FMCG (Fast Moving Consumer Goods) brands, we examine the usage of mobile interaction, social interaction and brand interaction in current branded app design.
Keep an eye on your photos: automatic image tagging on mobile devices BIBAFull-Text 513-518
  Nina Runge; Dirk Wenig; Rainer Malaka
In this paper we present how to tag images automatically based on the image and sensor data from a mobile device. We developed a system that computes low-level tags using the image itself and its metadata. Based on these tags and previous user tags, we learn high-level tags. With a client-server implementation, we offload computationally expensive algorithms in order to recommend tags as fast as possible. We show which feature extraction methods work best in combination with a machine learning technique to recommend good tags.
Gravity: automatic location tracking system between a car and a pedestrian BIBAFull-Text 519-524
  Changhoon Oh; Jeongsoo Park; Bongwon Suh
The use of car navigation systems is very common nowadays. Most car navigation services are based on turn-by-turn instructions and distance calculations. Academic research in this field has focused on evaluating basic usability. However, such products and studies have not covered the various user needs that arise in specific driving situations. For example, in a complex city space, drivers often face burdensome problems, especially when picking up pedestrians. We conducted a semi-structured online survey asking about specific problems, work-arounds, and suggestions in picking-up situations. We grouped responses into several issue points based on their similarities and derived design implications for a car navigation system supporting picking-up situations. Through this user-centered design approach, we developed "Gravity -- Automatic Location Tracking System between a Car and a Pedestrian" as a prototype, evaluated its usability, and received favorable feedback.
Shake 'n' Tap: a gesture enhanced keyboard for older adults BIBAFull-Text 525-530
  Mark Dunlop; Andreas Komninos; Emma Nicol; Iain Hamiliton
Text entry on smartphones and other touch-screen devices is essential for many tasks and a key factor in the usability of these devices. Physical and cognitive issues associated with age can aggravate the task of text entry for older adults. Technological exclusion due to low usability can present a significant problem for both the social and ongoing business-related tasks of older adults. This paper investigates a new touch-screen keyboard design for older adults that combines the familiar QWERTY keyboard layout with physical gestures. User studies with older adults showed that our keyboard reduced miss-taps but was slower to use, and raised issues for further research.
Augmenting bend gestures with pressure zones on flexible displays BIBAFull-Text 531-536
  Rufino Ansara; Audrey Girouard
Flexible displays have paved the road for a new generation of interaction styles that allow users to bend and twist their devices. We hypothesize that bend gestures can be augmented with "hot-key"-like pressure areas, which would allow single corner bends to have multiple functions. We created three pressure and bend interaction styles and compared them to bend-only gestures on two deformable prototypes. Users preferred the bend-only prototype but still appreciated the pressure & bend prototype, particularly for the lock/unlock application. We found that pressure interaction is a poor replacement for touch interaction, and present design suggestions to improve its performance.
A systematic comparison of 3 phrase sampling methods for text entry experiments in 10 languages BIBAFull-Text 537-542
  Germán Sanchis-Trilles; Luis A. Leiva
Today's reference datasets for conducting text entry experiments are only available in English, which may lead to misleading results when testing text entry methods with non-native English speakers. We compared 3 automated phrase sampling methods from the literature: Random, Ngram, and MemRep. MemRep performed best according to a statistical analysis and qualitative observations. The result is a collection of 30 datasets across 10 major languages, which we share with the community via this paper.
Developing tactile feedback for wearable presentation: observations from using a participatory approach BIBAFull-Text 543-548
  Flynn Wolf; Ravi Kuber
In this paper, we describe a participatory approach to developing tactile feedback for a head-mounted device. Three focus groups iteratively designed and evaluated tactile interaction concepts for user-generated use-case scenarios. The groups produced productive design insights regarding tactile coding schemes for the scenario conditions, as well as methodological innovations to participatory design techniques for interaction development in unfamiliar sensory modalities such as touch. The study has culminated in a library of tactile icons relating to spatial concepts, which will be tested as part of future work.
"People don't bump": sharing around mobile phones in close proximity BIBAFull-Text 549-554
  Afshan Kirmani; Rowanne Fleck
A large body of mobile phone sharing research focuses on creating new interaction techniques for sharing, and considers the usability of such applications and features whilst ignoring the context of their use, adoption or appropriation. It is therefore not known whether these technologies are used in practice or whether they really meet people's sharing needs. The aim of this research was to understand current real-world sharing practices around smartphones through a diary study with 63 participants. We focused on close-proximity sharing and discovered that new technologies supporting this kind of sharing, for example bumping handsets together to exchange files, are not widely used. More than half of all sharing via phones in this sample involved only telling, showing or passing the phone, though this often triggered further sharing. Possible explanations and their implications are discussed.
Mobile designs to support user experiences of energy conservation in hotels BIBAFull-Text 555-560
  Xiying Wang; Susan R. Fussell
In the U.S., hotels are a heavy energy-consuming sector, yet they sometimes prioritize customer satisfaction over lower energy use. In this study, we discuss how mobile designs can help motivate energy-saving behaviors while maintaining user satisfaction. We conducted a diary study with 13 participants and an interview study with 20 participants to understand user experiences around energy consumption in hotel and motel rooms. We found that people unnecessarily consume energy because they are unfamiliar with the hotel room environment and want to be catered to by the hotel. We suggest two mobile design ideas, location-based energy conservation and blending lights, and discuss the opportunities for mobile designs to balance personal control and automation so as to support user experience and satisfaction while decreasing energy use in hospitality settings.

Interactive tutorials

Mobile-based tangible interaction techniques for shared displays BIBAFull-Text 561-562
  Ali Mazalek; Ahmed Sabbir Arif
This tutorial explores the possibility of using touchscreen-based mobile devices as active tangibles on an interactive tabletop surface. The tutorial starts with an open discussion about various aspects of tangible interaction, including an overview of different approaches and design principles. It then guides participants through the design and development of innovative interaction techniques, where mobile phones are used as active tangibles on a shared tabletop display. The intent is to encourage the mobile HCI community to further explore the possibility of using everyday devices such as mobile phones as tangibles.
Wearable computing: a human-centered view of key concepts, application domains, and quality factors BIBAFull-Text 563-564
  Vivian Genaro Motti; Spencer Kohn; Kelly Caine
This tutorial presents a human-centered view of the state-of-the-art of wearable computing. Considering scientific and industrial aspects, it provides key definitions in the domain, goes through practical applications and use case scenarios, and concludes with quality factors and best design practices. An interactive component will aid participants to apply the theoretical concepts presented.
Mobile health: beyond consumer apps BIBAFull-Text 565-566
  Jill Freyne
The explosion of apps for the medical and wellness sectors has been noted by many. Consumer apps, which provide innovative solutions for self-management of a range of health problems, have flooded the market due to high consumer demand. More recently we have seen an increased presence of truly medical applications for clinical professionals and patients with serious conditions. In the majority of cases, consumer-based mHealth apps allow people to do old things in new ways, such as recording health measures digitally rather than on paper. Medical apps, aimed at increasing the quality and efficiency of existing health care delivery models, provide clinical staff with convenient tools, easy-to-access resources and communication mechanisms. Finally, in rare and exciting cases we are seeing mHealth applications that do things in entirely new ways to drive real innovation in health care delivery through mobile devices. This tutorial will inform participants about the breadth of mHealth applications that are transforming the health services sector and put forward a strong case for HCI and efficacy research.
Speech-based interaction: myths, challenges, and opportunities BIBAFull-Text 567-568
  Cosmin Munteanu; Gerald Penn
Human-Computer Interaction (HCI) research has long been dedicated to facilitating information transfer between humans and machines better and more naturally. Unfortunately, humans' most natural form of communication, speech, is also one of the most difficult modalities for machines to understand. This is largely due to speech being the highest-bandwidth communication channel we possess. As such, significant research efforts, from engineering to linguistics to the cognitive sciences, have been spent over the past several decades on improving machines' ability to understand speech. Yet the MobileHCI community (and HCI in general) has been relatively timid in embracing this modality as a central focus of research. This can be attributed in part to the relatively discouraging levels of accuracy in understanding speech, in contrast with often-unfounded claims of success from industry, but also to the intrinsic difficulty of designing and especially evaluating speech and natural language interfaces.
   The goal of this course is to inform the MobileHCI community of the current state of speech and natural language research, to dispel some of the myths surrounding speech-based interaction, as well as to provide an opportunity for researchers and practitioners to learn more about how speech recognition and speech synthesis work, what are their limitations, and how they could be used to enhance current interaction paradigms. Through this, we hope that MobileHCI researchers and practitioners will learn how to combine recent advances in speech processing with user-centred principles in designing more usable and useful speech-based interactive systems.

Workshop summaries

People, places and things: a mobile locative mapping workshop BIBAFull-Text 569-572
  Martha Ladly; Bryn Ludlow; Guillermina Buzio
Personal digital technologies have become the tools of reproduction for personal narration and broad cultural critique. Mobile social media enables individuals to function as storytellers and public commentators, with practices that offer an explicit engagement between people, places, and things. Mobile technologies and mapping tools enable direct connection to place, involving local communities and public dissemination. Mobile narratives and their creators can also make a larger contribution to collective memories of place, speaking back to aspects of culture at large.
PID-MAD 2014: second international workshop on prototyping to support the interaction designing in mobile application development BIBAFull-Text 573-576
  Shah Rukh Humayoun; Steffen Hess; Achim Ebert; Yael Dubinsky
The current mobile paradigm is in many ways fundamentally different from the conventional desktop paradigm due to factors such as multi-touch gesture interaction, use of sensors, a single-task focused model, etc. These factors pose several new challenges for interaction designers in communicating their ideas and thoughts during early design activities, challenges they may be unable to tackle properly with traditional prototyping techniques. Therefore, we envision that research must address the need to change existing prototyping techniques and focus on novel prototyping approaches and frameworks that support not only the interaction design process but the whole mobile app development process. Following the first workshop, PID-MAD 2014 provides a platform for the interested communities to discuss issues and brings together researchers and practitioners to share their knowledge and experience in order to tackle the upcoming challenges.
Enhancing self-reflection with wearable sensors BIBAFull-Text 577-580
  Genovefa Kefalidou; Anya Skatova; Michael Brown; Victoria Shipp; James Pinchin; Paul Kelly; Alan Dix; Xu Sun
Advances in ubiquitous technologies have changed the way humans interact with the world around them. Technology has the power not only to inform and perform but also to further people's experiences of the world. It has enhanced the methodological approaches within the CHI research realm in terms of data gathering (e.g. via wearable sensors) and sharing (e.g. via self-reflection methods). While such methodologies have mainly been adopted in isolation, their implications and synergy have yet to be fully explored. This workshop brings together a multidisciplinary group of researchers to explore and experience the use of wearable sensors together with self-reflection as a multi-method approach for conducting research and fully experiencing the world on-the-go.
Socio-technical practices and work-home boundaries BIBAFull-Text 581-584
  Anna L. Cox; Jon Bird; Natasha Mauthner; Susan Dray; Anicia Peters; Emily Collins
Recent advances in mobile technology have had many positive effects on the ways in which people can combine work and home life. For example, having remote access enables people to work from home, or work flexible hours that fit around caring responsibilities. They also support communication with colleagues and family members, and enable digital hobbies. However, the resulting 'always-online' culture can undermine work-home boundaries and cause stress to those who feel under pressure to respond immediately to digital notifications. This workshop will explore how a socio-technical perspective, which views boundaries as being constituted by everyday socio-technical practices, can inform the design of technologies that help maintain boundaries between work and home life.
Re-imagining commonly used mobile interfaces for older adults BIBAFull-Text 585-588
  Emma Nicol; Mark Dunlop; Andreas Komninos; Marilyn McGee-Lennon; Lynne Baillie; Alistair Edwards; Parisa Eslambolchilar; Joy Goodman-Deane; Lilit Hakobyan; Jo Lumsden; Ingrid Mulder; Patrick Rau; Katie Siek
Many countries have an increasingly aging population. In recent years, mobile technologies have had a massive impact on social and working lives. As the older user population grows, many people will want to continue professional, social and lifestyle use of mobiles into their 70s and beyond. Mobile technologies can lead to increased community involvement and personal independence. While mobile technologies provide many opportunities, the aging process can interfere with their use. This workshop brings together researchers who are re-imagining common mobile interfaces so that they are better suited to use by older adults.
Workshop on designing the future of mobile healthcare support BIBAFull-Text 589-592
  Sara Diamond; Bhuvaneswari Arunachalan; Derek Reilly; Anne Stevens
This workshop aims to discuss and develop ideas on how healthcare services, mobile technologies, and visual analytics techniques can be leveraged to contribute to new designs for mobile healthcare support systems. Designing contemporary mobile support systems for healthcare requires a clear understanding of users' information requirements, behaviors and basic needs. Design must take into account the challenges of human-device interaction in the healthcare environment; the extension of the care environment beyond the institutional setting and the engagement of patients, facility residents and families in an extended circle of care; and issues of formal and informal data sharing and privacy. This workshop invites researchers and designers working in relevant fields to discuss, compare, and demonstrate effective design approaches that can be adopted to improve the design of mobile support systems for interactive visualization in healthcare.

Design competition & future innovations

NARRATIVES: geolocative cinema application BIBAFull-Text 593-595
  Solomon Nagler; Andrew Hicks; Michael Hackett; Katja Zachkarko
A tool for filmmakers and spectators, Narratives looks at how interactive technology can transform cinematic experience, augmenting the interactivity of traditional cinematic form with locative media practices that expand storytelling into traversable spaces.
The switchboard: a virtual proprioceptive training and rehabilitation device BIBAFull-Text 597-599
  Edgar Rodriguez; Kah Chan; Sarah Hadfield
Swibo Limited is currently engaged in product development of the Switchboard, which aims to increase the motivation of patients requiring proprioceptive awareness and balance training. This paper summarises the findings made since project inception in November 2013.
Time tremors: developing transmedia gaming for children BIBAFull-Text 601-603
  Conor Holler; Patrick Crowe; Alex Mayhew; Adam Tindale; Sara Diamond
Time Tremors is a transmedia experience for children aged 8-14 that crosses television, web, locative media, and mobile apps. Time Tremors is a collection game in which players search for objects from history supposedly scattered throughout time and space, hidden, invisible to the human eye but detectable and collectable using a variety of mobile and online broadband technologies. Extending the game into locative augmented reality and mobile play was an applied research challenge that required narrative continuity while ensuring safe play.
CoPerformance: a rapid prototyping platform for developing interactive artist-audience performances with mobile devices BIBAFull-Text 605-607
  Bohdan Anderson; Symon Oliver; Patricio Davila
How can mobile technology create new models for audience participation in live performances? CoPerformance is a research project that aims to develop a set of plug-and-play participatory performance modules. These modules will allow designers to quickly build/test interactive experiences that utilize mobile devices. CoPerformance can be deployed via web browsers or native applications. The goal of this platform is to use existing frameworks to offer designers a powerful set of tools, templates, and scripts for interactive performances; and decrease the barriers for building participatory performances.
Phonorama: mobile spatial navigation by directional stereophony BIBAFull-Text 609-611
  Michael Markert; Jens Heitjohann; Jens Geelhaar
Phonorama is a sonar-like mobile application that uses audio hotspots bound to specific geo-locations. Decreasing the distance to the audio source increases the volume. The addition of direction indicators enables rich, immediate and immersive audio-only spatial exploration: the direction of the audio hotspot source relative to the user's location is represented by real-time stereo panning (Directional Stereophony). If the audio source is to the left of the user, the volume on the left headphone speaker is louder than the volume on the right speaker. While the software has been created for audio guides in the context of media arts, it might be useful for all kinds of acoustic navigation, e.g. information hotspots in museums and public spaces, navigational aids, assistive technologies, social networking or localized advertisements.

Industrial case studies

Experimenting on the cognitive walkthrough with users BIBAFull-Text 613-618
  Wallace Lira; Renato Ferreira; Cleidson de Souza; Schubert Carvalho
This paper presents a case study aiming to investigate which variant of the Think-Aloud Protocol (i.e., the Concurrent Think-Aloud or the Retrospective Think-Aloud) better integrates with the Cognitive Walkthrough with Users. To this end we performed a case study involving twelve users and one usability evaluator. Usability problems uncovered by each method were evaluated to help us understand the strengths and weaknesses of the studied usability testing methods. The results suggest that 1) the Cognitive Walkthrough with Users integrates equally well with both Think-Aloud Protocol variants; 2) the Retrospective Think-Aloud finds more usability problems; and 3) the Concurrent Think-Aloud is slightly faster to perform and was more cost-effective. However, this is only one case study, and further research is needed to verify whether the results are statistically significant.
Logging and visualization of touch interactions on teach pendants BIBAFull-Text 619-624
  Clemens Holzmann; Florian Lettner; Christian Grossauer; Werner Wetzlinger; Paul Latzelsperger; Christian Augdopler
In industrial automation, there is a growing use of touch panels for controlling and programming machines. Compared to traditional panels with physical buttons, they provide higher flexibility and operating efficiency. A big challenge in their user interface design is usability, which is directly related to the operator's safety and performance. In this paper, we present a software solution for the acquisition and visualization of user interaction data on teach pendants, which are handheld terminals for teaching robot positions. Interaction data include touch coordinates and navigation sequences, which are visualized with a heatmap and a graph view, respectively. The design and implementation of the software are based on interviews we conducted with companies from the automation industry. The straightforward integration of our software by manufacturers, together with the automated recording and visualization of user interactions, allows for cost-efficient usability analysis of handheld terminals.
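The heatmap view mentioned above presumably bins logged touch coordinates into a grid before rendering. A minimal sketch of such binning, with illustrative names and cell size not taken from the paper:

```python
def touch_heatmap(touches, width, height, cell=50):
    """Bin (x, y) touch coordinates into a grid of counts.
    Each grid cell covers a cell-by-cell pixel square of the panel;
    a renderer would map counts to colors for the heatmap view."""
    cols = (width + cell - 1) // cell
    rows = (height + cell - 1) // cell
    grid = [[0] * cols for _ in range(rows)]
    for x, y in touches:
        if 0 <= x < width and 0 <= y < height:
            grid[y // cell][x // cell] += 1
    return grid
```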
Polly: "being there" through the parrot and a guide BIBAFull-Text 625-630
  Sven Kratz; Don Kimber; Weiqing Su; Gwen Gordon; Don Severns
Telepresence systems usually lack mobility. Polly, a wearable telepresence device, allows users to explore remote locations or experience events remotely by means of a person who serves as a mobile "guide". We built a series of hardware prototypes, and our current, most promising embodiment consists of a smartphone mounted on a wearable, stabilized gimbal. The gimbal enables remote control of the viewing angle and provides active image stabilization while the guide is walking. We present qualitative findings from a series of 8 field tests using either Polly or only a mobile phone. We found that guides felt more physical comfort when using Polly than a phone, and that Polly was accepted by other people at the remote location. Remote participants appreciated the stabilized video and the ability to control the camera view. Connection and bandwidth issues appear to be the most challenging for Polly-like systems.
UX suite: a touch sensor evaluation platform BIBAFull-Text 631-636
  Justin Mockler
We present UX Suite, a set of free testing tools that can be downloaded and used to evaluate the touch sensor performance of Android devices. The tools in the application focus on evaluating touchscreen sensor quality and human performance on mobile devices. Current tests include a Fitts' law test, based on one of the most widely accepted predictive models of target-selection accuracy, and an accidental-activation test. Additional tests measuring other characteristics of the touch sensor will be included in the future. This paper discusses the application and a user study conducted to assess the application's usability. The results of this study show that users are able and willing to complete the tests in the application without a moderator. The remote-testing features allow large, distributed user studies to be run with little oversight from researchers.
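For context, the Fitts' law model underlying such a test predicts movement time from target distance and width; in the common Shannon formulation, MT = a + b * log2(D/W + 1). A small sketch (the regression coefficients a and b below are placeholders, normally fitted from measured data, and none of this is UX Suite's actual code):

```python
import math

def fitts_id(distance, width):
    """Index of difficulty (bits), Shannon formulation."""
    return math.log2(distance / width + 1.0)

def predicted_mt(distance, width, a=0.2, b=0.1):
    """Predicted movement time MT = a + b * ID.
    a (intercept, s) and b (slope, s/bit) are illustrative values."""
    return a + b * fitts_id(distance, width)
```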
Design and field testing of a system for remote monitoring of sea turtle nests BIBAFull-Text 637-642
  Thomas Zimmerman; Britta Muiznieks; Eric Kaplan; Samuel Wantman; Lou Browning; Eric Frey
Protective barriers are often deployed to safely guide sea turtle hatchlings to the ocean, but the barriers are contentious because they impede foot and vehicle traffic of beach users. Our goal is to develop a wireless system that can detect sea turtle hatching, enabling "just in time" deployment of barriers and notifying tourists so they can watch sea turtles emerge from nests, generating awareness of and compassion for these endangered species. We report on the development of a sensor and data acquisition system capable of detecting nest hatching activity in the harsh environment of a beach, including sensor selection, wireless networking, low-power design, data processing, and environmental packaging. Field tests confirm the suitability of a low-cost, low-power three-axis accelerometer and mobile networks for detecting and remotely communicating sea turtle hatching activity.
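One simple way such accelerometer data could be turned into a hatching alert is to flag windows where the acceleration magnitude deviates from 1 g often enough. This is a hypothetical sketch; the threshold, window logic, and units are assumptions, not the paper's detection algorithm:

```python
def detect_activity(samples, threshold=0.05, min_events=5):
    """Flag nest activity in a window of three-axis accelerometer
    samples (in g). An "event" is a sample whose magnitude deviates
    from 1 g (gravity at rest) by more than the threshold."""
    events = sum(
        1 for (ax, ay, az) in samples
        if abs((ax * ax + ay * ay + az * az) ** 0.5 - 1.0) > threshold
    )
    return events >= min_events
```

A real deployment would tune the threshold against beach noise (wind, footsteps) and report positive windows over the mobile network.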