
Proceedings of the 2009 Symposium on Usable Privacy and Security

Fullname: Symposium on Usable Privacy and Security
Editors: Lorrie Faith Cranor
Location: Mountain View, California
Dates: 2009-Jul-15 to 2009-Jul-17
Publisher: ACM
Standard No: ISBN 1-60558-736-2, 978-1-60558-736-3; ACM DL: Table of Contents; hcibib: SOUPS09
Papers: 54
Pages: 205
Links: Conference Home Page
  1. Mental models
  2. Community
  3. Passwords and authentication
  4. Small devices
  5. Tools
  6. Tutorials
  7. Posters
  8. Posters showcasing usable privacy and security papers published in the past year at other conferences
  9. Invited talk
  10. Discussion sessions

Mental models

Revealing hidden context: improving mental models of personal firewall users BIBAKFull-Text 1
  Fahimeh Raja; Kirstie Hawkey; Konstantin Beznosov
The Windows Vista personal firewall provides its diverse users with a basic interface that hides many operational details. However, concealing the impact of network context on the security state of the firewall may result in users developing an incorrect mental model of the protection provided by the firewall. We present a study of participants' mental models of Vista Firewall (VF). We investigated changes to those mental models and their understanding of the firewall's settings after working with both the VF basic interface and our prototype. Our prototype was designed to support development of a more contextually complete mental model through inclusion of network location and connection information. We found that participants produced richer mental models after using the prototype than when working with the VF basic interface; they were also significantly more accurate in their understanding of the configuration of the firewall. Based on our results, we discuss methods of improving user understanding of underlying system states by revealing hidden context, while considering the tension between complexity of the interface and security of the system.
Keywords: configuration, firewall, mental model, usable security
Social applications: exploring a more secure framework BIBAKFull-Text 2
  Andrew Besmer; Heather Richter Lipford; Mohamed Shehab; Gorrell Cheek
Online social network sites, such as MySpace, Facebook and others have grown rapidly, with hundreds of millions of active users. A new feature on many sites is social applications -- applications and services written by third party developers that provide additional functionality linked to a user's profile. However, current application platforms put users at risk by permitting the disclosure of large amounts of personal information to these applications and their developers. This paper formally abstracts and defines the current access control model applied to these applications, and builds on it to create a more secure framework. We do so in the interest of preserving as much of the current architecture as possible, while seeking to provide a practical balance between security and privacy needs of the users, and the needs of the applications to access users' information. We present a user study of our interface design for setting a user-to-application policy. Our results indicate that the model and interface work for users who are more concerned with their privacy, but we still need to explore alternate means of creating policies for those who are less concerned.
Keywords: access control, privacy, security, social networking applications, web 2.0
School of phish: a real-world evaluation of anti-phishing training BIBAKFull-Text 3
  Ponnurangam Kumaraguru; Justin Cranshaw; Alessandro Acquisti; Lorrie Cranor; Jason Hong; Mary Ann Blair; Theodore Pham
PhishGuru is an embedded training system that teaches users to avoid falling for phishing attacks by delivering a training message when the user clicks on the URL in a simulated phishing email. In previous lab and real-world experiments, we validated the effectiveness of this approach. Here, we extend our previous work with a 515-participant, real-world study in which we focus on long-term retention and the effect of two training messages. We also investigate demographic factors that influence training and general phishing susceptibility. Results of this study show that (1) users trained with PhishGuru retain knowledge even after 28 days; (2) adding a second training message to reinforce the original training decreases the likelihood of people giving information to phishing websites; and (3) training does not decrease users' willingness to click on links in legitimate messages. We found no significant difference between males and females in the tendency to fall for phishing emails both before and after the training. We found that participants in the 18-25 age group were consistently more vulnerable to phishing attacks on all days of the study than older participants. Finally, our exit survey results indicate that most participants enjoyed receiving training during their normal use of email.
Keywords: email, embedded training, phishing, real-world studies, usable privacy and security

Community

A "nutrition label" for privacy BIBAKFull-Text 4
  Patrick Gage Kelley; Joanna Bresee; Lorrie Faith Cranor; Robert W. Reeder
We used an iterative design process to develop a privacy label that presents to consumers the ways organizations collect, use, and share personal information. Many surveys have shown that consumers are concerned about online privacy, yet current mechanisms to present website privacy policies have not been successful. This research addresses the present gap in the communication and understanding of privacy policies, by creating an information design that improves the visual presentation and comprehensibility of privacy policies. Drawing from nutrition, warning, and energy labeling, as well as from the effort towards creating a standardized banking privacy notification, we present our process for constructing and refining a label tuned to privacy. This paper describes our design methodology; findings from two focus groups; and accuracy, timing, and likeability results from a laboratory study with 24 participants. Our study results demonstrate that compared to existing natural language privacy policies, the proposed privacy label allows participants to find information more quickly and accurately, and provides a more enjoyable information seeking experience.
Keywords: P3P, information design, labeling, nutrition label, policy, privacy, user interface
Challenges in supporting end-user privacy and security management with social navigation BIBAKFull-Text 5
  Jeremy Goecks; W. Keith Edwards; Elizabeth D. Mynatt
Social navigation is a promising approach for supporting privacy and security management. By aggregating and presenting the choices made by others, social navigation systems can provide users with easily understandable guidance on security and privacy decisions, rather than requiring that they understand low-level technical details in order to make informed decisions. We have developed two prototype systems to explore how social navigation can help users manage their privacy and security. The Acumen system employs social navigation to address a common privacy activity, managing Internet cookies, and the Bonfire system uses social navigation to help users manage their personal firewall. Our experiences with Acumen and Bonfire suggest that, despite the promise of social navigation, there are significant challenges in applying these techniques to the domains of end-user privacy and security management. Due to features of these domains, individuals may misuse community data when making decisions, leading to incorrect individual decisions, inaccurate community data, and "herding" behavior that is an example of what economists term an informational cascade. By understanding this phenomenon in these terms, we develop and present two general approaches for mitigating herding in social navigation systems that support end-user security and privacy management, mitigation via algorithms and mitigation via user interaction. Mitigation via user interaction is a novel and promising approach to mitigating cascades in social navigation systems.
Keywords: acumen, bonfire, decision making, end-user privacy and security, herding, informational cascades, social navigation
Ubiquitous systems and the family: thoughts about the networked home BIBAKFull-Text 6
  Linda Little; Elizabeth Sillence; Pam Briggs
Developments in ubiquitous and pervasive computing herald a future in which computation is embedded into our daily lives. Such a vision raises important questions about how people, especially families, will be able to engage with and trust such systems whilst maintaining privacy and individual boundaries. To begin to address such issues, we have recently conducted a wide-reaching study eliciting trust, privacy and identity concerns about pervasive computing. Over three hundred UK citizens participated in 38 focus groups. The groups were shown Videotaped Activity Scenarios [11] depicting pervasive or ubiquitous computing applications in a number of contexts including shopping. The data raises a number of important issues from a family perspective in terms of access, control, responsibility, benefit and complexity. The findings also highlight the conflict between increased functionality and the subtle social interactions that sustain family bonds. We present a Pre-Concept Evaluation Tool (PRECET) for use in design and implementation of ubicomp systems.
Keywords: privacy, social interaction, the family, trust, ubiquitous computing

Passwords and authentication

Look into my eyes!: can you guess my password? BIBAKFull-Text 7
  Alexander De Luca; Martin Denzel; Heinrich Hussmann
Authentication systems for public terminals and thus public spaces have to be fast, easy and secure. Security is of utmost importance since the public setting allows manifold attacks from simple shoulder surfing to advanced manipulations of the terminals. In this work, we present EyePassShapes, an eye tracking authentication method that has been designed to meet these requirements. Instead of using standard eye tracking input methods that require precise and expensive eye trackers, EyePassShapes uses eye gestures. This input method works well with data about the relative eye movement, which is much easier to detect than the precise position of the user's gaze and works with cheaper hardware. Different evaluations on technical aspects, usability, security and memorability show that EyePassShapes can significantly increase security while being easy to use and fast at the same time.
Keywords: EyePassShapes, authentication, eye gestures, eye tracking, privacy, security
Personal choice and challenge questions: a security and usability assessment BIBAKFull-Text 8
  Mike Just; David Aspinall
Challenge questions are an increasingly important part of mainstream authentication solutions, yet there are few published studies concerning their usability or security. This paper reports on an experimental investigation into user-chosen questions. We collected questions from a large cohort of students, in a way that encouraged participants to give realistic data. The questions allow us to consider possible modes of attack and to judge the relative effort needed to crack a question, according to an innovative model of the knowledge of the attacker. Using this model, we found that many participants were likely to have chosen questions with low entropy answers, yet they believed that their challenge questions would resist attacks from a stranger. However, by asking multiple questions, we are able to show a marked improvement in security for most users. In a second stage of our experiment, we applied existing metrics to measure the usability of the questions and answers. Despite having youthful memories and choosing their own questions, users made errors more frequently than desirable.
Keywords: authentication, challenge questions, security, usability
1 + 1 = you: measuring the comprehensibility of metaphors for configuring backup authentication BIBAKFull-Text 9
  Stuart Schechter; Robert W. Reeder
Backup authentication systems verify the identity of users who are unable to perform primary authentication usually as a result of forgetting passwords. The two most common authentication mechanisms used for backup authentication by webmail services, personal authentication questions and email-based authentication, are insufficient. Many webmail users cannot benefit from email-based authentication because their webmail account is their primary email account. Personal authentication questions are frequently forgotten and prone to security failures, as illustrated by the increased scrutiny they received following their implication in the compromise of Republican vice presidential candidate Sarah Palin's Yahoo! account.
   One way to address the limitations of existing backup authentication mechanisms is to add new ones. Since no mechanism is completely secure, system designers must support configurations that require multiple authentication tasks be completed to authenticate. Can users comprehend such a rich set of new options? We designed two metaphors to help users comprehend which combinations of authentication tasks would be sufficient to authenticate. We performed a usability study to measure users' comprehension of these metaphors. We find that the vast majority of users comprehend screenshots that represent authentication as an exam, in which points are awarded for the completion of individual authentication tasks and authentication succeeds when an authenticatee has accumulated enough points to achieve a passing score.
Keywords: authentication, backup authentication, password reset

Small devices

Serial hook-ups: a comparative usability study of secure device pairing methods BIBAFull-Text 10
  Alfred Kobsa; Rahim Sonawalla; Gene Tsudik; Ersin Uzun; Yang Wang
Secure Device Pairing is the bootstrapping of secure communication between two previously unassociated devices over a wireless channel. The human-imperceptible nature of wireless communication, lack of any prior security context, and absence of a common trust infrastructure open the door for Man-in-the-Middle (aka Evil Twin) attacks. A number of methods have been proposed to mitigate these attacks, each requiring user assistance in authenticating information exchanged over the wireless channel via some human-perceptible auxiliary channels, e.g., visual, acoustic or tactile.
   In this paper, we present results of the first comprehensive and comparative study of eleven notable secure device pairing methods. Usability measures include: task performance times, ratings on System Usability Scale (SUS), task completion rates, and perceived security. Study subjects were controlled for age, gender and prior experience with device pairing. We present overall results and identify problematic methods for certain classes of users as well as methods best-suited for various device configurations.
Usability and security of out-of-band channels in secure device pairing protocols BIBAKFull-Text 11
  Ronald Kainda; Ivan Flechais; A. W. Roscoe
Initiating and bootstrapping secure, yet low-cost, ad-hoc transactions is an important challenge that needs to be overcome if the promise of mobile and pervasive computing is to be fulfilled. For example, mobile payment applications would benefit from the ability to pair devices securely without resorting to conventional mechanisms such as shared secrets, a Public Key Infrastructure (PKI), or trusted third parties. A number of methods have been proposed for doing this based on the use of a secondary out-of-band (OOB) channel that either authenticates information passed over the normal communication channel or otherwise establishes an authenticated shared secret which can be used for subsequent secure communication. A key element of the success of these methods is dependent on the performance and effectiveness of the OOB channel, which usually depends on people performing certain critical tasks correctly.
   In this paper, we present the results of a comparative usability study on methods that propose using humans to implement the OOB channel and argue that most of these proposals fail to take into account factors that may seriously harm the security and usability of a protocol. Our work builds on previous research in the usability of pairing methods and the accompanying recommendations for designing user interfaces that minimise human mistakes. Our findings show that the traditional methods of comparing and typing short strings into mobile devices are still preferable despite claims that new methods are more usable and secure, and that user interface design alone is not sufficient in mitigating human mistakes in OOB channels.
Keywords: pairing devices, security protocols, usability
Games for extracting randomness BIBAFull-Text 12
  Ran Halprin; Moni Naor
Randomness is a necessary ingredient in various computational tasks and especially in Cryptography, yet many existing mechanisms for obtaining randomness suffer from numerous problems. We suggest utilizing the behavior of humans while playing competitive games as an entropy source, in order to enhance the quality of the randomness in the system. This idea has two motivations: (i) results in experimental psychology indicate that humans are able to behave quite randomly when engaged in competitive games in which a mixed strategy is optimal, and (ii) people have an affection for games, and this leads to longer play yielding more entropy overall. While the resulting strings are not perfectly random, we show how to integrate such a game into a robust pseudo-random generator that enjoys backward and forward security.
   We construct a game suitable for randomness extraction, and test users' playing patterns. The results show that in less than two minutes a human can generate 128 bits that are 2^-64-close to random, even on a limited computer such as a PDA that might have no other entropy source.
   As proof of concept, we supply complete working software for a robust PRG. It generates random sequences based solely on human game play, and thus does not depend on the operating system or any external factor.

Tools

Sanitization's slippery slope: the design and study of a text revision assistant BIBAKFull-Text 13
  Richard Chow; Ian Oberst; Jessica Staddon
For privacy reasons, sensitive content may be revised before it is released. The revision often consists of redaction, that is, the "blacking out" of sensitive words and phrases. Redaction has the side effect of reducing the utility of the content, often so much that the content is no longer useful. Consequently, government agencies and others are increasingly exploring the revision of sensitive content as an alternative to redaction that preserves more content utility. We call this practice sanitization. In a sanitized document, names might be replaced with pseudonyms and sensitive attributes might be replaced with hypernyms. Sanitization adds to redaction the challenge of determining what words and phrases reduce the sensitivity of content. We have designed and developed a tool to assist users in sanitizing sensitive content. Our tool leverages the Web to automatically identify sensitive words and phrases and quickly evaluates revisions for sensitivity. The tool, however, does not identify all sensitive terms and mistakenly marks some innocuous terms as sensitive. This is unavoidable because of the difficulty of the underlying inference problem and is the main reason we have designed a sanitization assistant as opposed to a fully-automated tool. We have conducted a small study of our tool in which users sanitize biographies of celebrities to hide the celebrity's identity both with and without our tool. The user study suggests that while the tool is very valuable in encouraging users to preserve content utility and can preserve privacy, this usefulness and apparent authoritativeness may lead to a "slippery slope" in which users neglect their own judgment in favor of the tool's.
Keywords: data loss prevention, inference detection, privacy, redaction, sanitization
Balancing usability and security in a video CAPTCHA BIBAKFull-Text 14
  Kurt Alfred Kluever; Richard Zanibbi
We present a technique for using content-based video labeling as a CAPTCHA task. Our CAPTCHAs are generated from YouTube videos, which contain labels (tags) supplied by the person that uploaded the video. They are graded using a video's tags, as well as tags from related videos. In a user study involving 184 participants, we were able to increase the human success rate on our video CAPTCHA from roughly 70% to 90%, while keeping the success rate of a tag frequency-based attack fixed at around 13%. Through a different parameterization of the challenge generation and grading algorithms, we were able to reduce the success rate of the same attack to 2%, while still increasing the human success rate from 70% to 75%. The usability and security of our video CAPTCHA appears to be comparable to existing CAPTCHAs, and a majority of participants (60%) indicated that they found the video CAPTCHAs more enjoyable than traditional CAPTCHAs in which distorted text must be transcribed.
Keywords: Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA), human interactive proof (HIP), tagging, video understanding
How users use access control BIBAKFull-Text 15
  D. K. Smetters; Nathan Good
Existing technologies for file sharing differ widely in the granularity of control they give users over who can access their data; achieving finer-grained control generally requires more user effort. We want to understand what level of control users need over their data, by examining what sorts of access policies users actually create in practice.
   We used automated data mining techniques to examine the real-world use of access control features present in standard document sharing systems in a corporate environment as used over a long (> 10 year) time span. We find that while users rarely need to change access policies, the policies they do express are actually quite complex. We also find that users participate in larger numbers of access control and email sharing groups than measured by self-report in previous studies. We hypothesize that much of this complexity might be reduced by considering these policies as examples of simpler access control patterns. From our analysis of what access control features are used and where errors are made, we propose a set of design guidelines for access control systems themselves and the tools used to manage them, intended to increase usability and decrease error.
Keywords: access control, file sharing, usability

Tutorials

Designing and evaluating usable security and privacy technology BIBFull-Text 16
  M. Angela Sasse; Clare-Marie Karat; Roy Maxion
Think Evil (tm) BIBFull-Text 17
  Nicholas Weaver

Posters

Threshold things that think: usable authorization for resharing BIBFull-Text 18
  Roel Peeters; Markulf Kohlweiss; Bart Preneel; Nicky Sulmon
Not one click for security? BIBFull-Text 19
  Alan Karp; Marc Stiegler; Tyler Close
Privacy stories: confidence in privacy behaviors through end user programming BIBFull-Text 20
  Luke Church; Jonathan Anderson; Joseph Bonneau; Frank Stajano
A new graphical password scheme against spyware by using CAPTCHA BIBFull-Text 21
  Haichang Gao; Xiyang Liu
The impact of expressiveness on the effectiveness of privacy mechanisms for location-sharing BIBFull-Text 22
  Michael Benisch; Patrick Gage Kelley; Norman Sadeh; Tuomas Sandholm; Janice Tsai; Lorrie Faith Cranor; Paul Hankes Drielsma
Designing for different levels of social inference risk BIBFull-Text 23
  Sara Motahari; Sotirios Ziavras; Quentin Jones
Integrating usability and accessibility in information assurance education BIBFull-Text 24
  Azene Zenebe; Claude Turner; Jinjuan Feng; Jonathan Lazar; Mike O'Leary
Educated guess on graphical authentication schemes: vulnerabilities and countermeasures BIBFull-Text 25
  Eiji Hayashi; Jason Hong; Nicolas Christin
BayeShield: conversational anti-phishing user interface BIBFull-Text 26
  Peter Likarish; Don Dunbar; Juan Pablo Hourcade; Eunjin Jung
Recall-a-story, a story-telling graphical password system BIBFull-Text 27
  Yves Maetz; Stéphane Onno; Olivier Heen
Escape from the matrix: lessons from a case-study in access-control requirements BIBFull-Text 28
  Kathi Fisler; Shriram Krishnamurthi
The impact of privacy indicators on search engine browsing patterns BIBFull-Text 29
  Janice Tsai; Serge Egelman; Lorrie Cranor; Alessandro Acquisti
Privacy suites: shared privacy for social networks BIBFull-Text 30
  Joseph Bonneau; Jonathan Anderson; Luke Church
Usable deidentification of sensitive patient care data BIBFull-Text 31
  Michael McQuaid; Kai Zheng; Nigel Melville; Lee Green
Analyzing use of privacy policy attributes in a location sharing application BIBFull-Text 32
  Eran Toch; Ramprasad Ravichandran; Lorrie Cranor; Paul Drielsma; Jason Hong; Patrick Kelley; Norman Sadeh; Janice Tsai
Studying location privacy in mobile applications: 'predator vs. prey' probes BIBFull-Text 33
  Keerthi Thomas; Clara Mancini; Lukasz Jedrzejczyk; Arosha K. Bandara; Adam Joinson; Blaine A. Price; Yvonne Rogers; Bashar Nuseibeh
Treat 'em like other devices: user authentication of multiple personal RFID tags BIBFull-Text 34
  Nitesh Saxena; Md. Borhan Uddin; Jonathan Voris
Textured agreements: re-envisioning electronic consent BIBFull-Text 35
  Matthew Kay; Michael Terry
A multi-method approach for user-centered design of identity management systems BIBFull-Text 36
  Pooya Jaferian; David Botta; Kirstie Hawkey; Konstantin Beznosov

Posters showcasing usable privacy and security papers published in the past year at other conferences

flyByNight: mitigating the privacy risks of social networking BIBFull-Text 37
  Matthew Lucas; Nikita Borisov
Conditioned-safe ceremonies and a user study of an application to web authentication BIBFull-Text 38
  Chris Karlof; J. D. Tygar; David Wagner
Graphical passwords as browser extension: implementation and usability study BIBFull-Text 39
  Kemal Bicakci; Mustafa Yuceel; Burak Erdeniz; Hakan Gurbaslar; Nart Bedin Atalay
It's no secret: measuring the security and reliability of authentication via 'secret' questions BIBFull-Text 40
  Stuart Schechter; A. J. Bernheim Brush; Serge Egelman
It's not what you know, but who you know: a social approach to last-resort authentication BIBFull-Text 41
  Stuart Schechter; Serge Egelman; Robert W. Reeder
A user study of the expandable grid applied to P3P privacy policy visualization BIBFull-Text 42
  Robert W. Reeder; Patrick Gage Kelley; Aleecia M. McDonald; Lorrie Faith Cranor
Who's viewed you?: the impact of feedback in a mobile location-sharing application BIBFull-Text 43
  Janice Tsai; Patrick Kelley; Paul Hankes Drielsma; Lorrie Cranor; Jason Hong; Norman Sadeh
New directions in multisensory authentication BIBFull-Text 44
  Madoka Hasegawa; Nicolas Christin; Eiji Hayashi
Machine learning attacks against the Asirra CAPTCHA BIBFull-Text 45
  Philippe Golle
A comparative study of online privacy policies and formats BIBFull-Text 46
  Aleecia M. McDonald; Robert W. Reeder; Patrick Gage Kelley; Lorrie Faith Cranor
Capturing social networking privacy preferences: can default policies help alleviate tradeoffs between expressiveness and user burden? BIBFull-Text 47
  Ramprasad Ravichandran; Michael Benisch; Patrick Gage Kelley; Norman Sadeh

Invited talk

Redirects to login pages are bad, or are they? BIBFull-Text 48
  Eric Sachs

Discussion sessions

Short and long term research suggestions for NSF and NIST BIBFull-Text 49
  Nancy Gillis
Ecological validity in studies of security and human behaviour BIBFull-Text 50
  Andrew Patrick
Invisible HCI-SEC: ways of re-architecting the operating system to increase usability and security BIBFull-Text 51
  Simson Garfinkel
Technology transfer of successful usable security research into product BIBFull-Text 52
  Mary Ellen Zurko
The family and communication technologies BIBFull-Text 53
  Linda Little
How does the emergence of reputation mechanisms affect the overall trust formation mechanisms, implicit and explicit, in the online environment? BIBFull-Text 54
  Kristiina Karvonen