
Proceedings of the 2011 Symposium on Usable Privacy and Security

Fullname: Symposium on Usable Privacy and Security
Editors: Lorrie Faith Cranor
Location: Pittsburgh, Pennsylvania
Dates: 2011-Jul-20 to 2011-Jul-22
Standard No: ISBN 1-4503-0911-0, 978-1-4503-0911-0
Links: Conference Home Page
Summary: Welcome to the Seventh Symposium On Usable Privacy and Security! This year's program features 15 technical papers, two workshops, two tutorials, 12 posters, 12 posters published in the past year at other conferences, a panel, a lightning talks session, and an invited talk. On Thursday evening SOUPS 2011 attendees will enjoy a dinner at the Pittsburgh Zoo and Aquarium.
    This year we received 45 technical paper submissions. The program committee provided two rounds of reviews. In the first round papers received an average of three reviews. In the second round, papers that had received one or more reviews better than "weak reject" in the first round received additional reviews. The goal of the second round was to ensure that a consistent standard of acceptance could be applied across all papers and, to this end, papers received as many as six reviews. We held an in-person program committee meeting (a SOUPS first) on Friday, the 13th of May. Fifteen papers were selected for presentation and publication.
  1. Security warnings
  2. Authentication
  3. SOUPS du jour
  4. Privacy on social network sites
  5. Perceptions of privacy and security

Security warnings

A brick wall, a locked door, and a bandit: a physical security metaphor for firewall warnings (p. 1)
  Fahimeh Raja; Kirstie Hawkey; Steven Hsu; Kai-Le Clement Wang; Konstantin Beznosov
We used an iterative process to design firewall warnings in which the functionality of a personal firewall is visualized based on a physical security metaphor. We performed a study to determine the degree to which our proposed warnings are understandable for users, and the degree to which they convey the risks and encourage safe behavior as compared to text warnings based on those from a popular personal firewall. The evaluation results show that our warnings facilitate the comprehension of warning information, better communicate the risk, and increase the likelihood of safe behavior. Moreover, they provide participants with a better understanding of both the functionality of a personal firewall and the consequences of their actions.
Using data type based security alert dialogs to raise online security awareness (p. 2)
  Max-Emanuel Maurer; Alexander De Luca; Sylvia Kempe
When browsing the Internet, users are likely to be exposed to security and privacy threats -- like fraudulent websites. Automatic browser mechanisms can protect them only to some extent. In other situations it is still important to raise the users' security awareness at the right moment. Passive indicators are mostly overlooked and blocking warnings are quickly dismissed by habituated users. In this work, we present a new concept of warnings that appear in-context, right next to data the user has just entered. Those dialogs are displayed whenever critical data types -- e.g. credit card data -- are entered by the users into online forms. Since they do not immediately interrupt the users' interaction but appear right in the users' focus, it is possible to place important security information in a way that it can be easily seen.
   We implemented the concept as a Firefox plugin and evaluated it in a series of studies: two lab studies, one focus group, and one real-world study. Results show that the concept is well accepted by users and that, with the plugin, non-expert participants in particular were more likely to identify fraudulent (or phishing) websites than with the standard browser warnings. In addition, we gathered interesting findings on warning usage.
On the challenges in usable security lab studies: lessons learned from replicating a study on SSL warnings (p. 3)
  Andreas Sotirakopoulos; Kirstie Hawkey; Konstantin Beznosov
We replicated and extended a 2008 study conducted at CMU that investigated the effectiveness of SSL warnings. We adjusted the experimental design to mitigate some of the limitations of that prior study; adjustments include allowing participants to use their web browser of choice and recruiting a more representative user sample. However, during our study we observed a strong disparity between our participants' actions during the laboratory tasks and their self-reported "would be" actions during similar tasks in everyday computer practices. Our participants attributed this disparity to the laboratory environment and the security it offered. In this paper we discuss our results and how the changes introduced to the initial study design may have affected them. We also discuss the challenges of observing natural behavior in a study environment, as well as the challenges of replicating previous studies given the rapid changes in web technology. Finally, we propose alternatives to traditional laboratory study methodologies that can be considered by the usable security research community when investigating research questions involving sensitive data where trust may influence behavior.

Authentication

What makes users refuse web single sign-on?: an empirical investigation of OpenID (p. 4)
  San-Tsai Sun; Eric Pospisil; Ildar Muslukhov; Nuray Dindar; Kirstie Hawkey; Konstantin Beznosov
OpenID is an open and promising Web single sign-on (SSO) solution. This work investigates the challenges and concerns web users face when using OpenID for authentication, and identifies what changes in the login flow could improve the users' experience and adoption incentives. We found our participants had several behaviors, concerns, and misconceptions that hinder the OpenID adoption process: (1) their existing password management strategies reduce the perceived usefulness of SSO; (2) many (26%) expressed concerns with single-point-of-failure related issues; (3) most (71%) held the incorrect belief that the OpenID credentials are being given to the content providers; (4) half exhibited an inability to distinguish a fake Google login form, even when prompted; (5) many (40%) were hesitant to consent to the release of their personal profile information; and (6) many (36%) expressed concern with the use of SSO on websites that contain valuable personal information or, conversely, are not trustworthy. We also found that with an improved affordance and privacy control, more than 60% of study participants would use Web SSO solutions on the websites they trust.
Breaking undercover: exploiting design flaws and nonuniform human behavior (p. 5)
  Toni Perković; Shujun Li; Asma Mumtaz; Syed Ali Khayam; Yousra Javed; Mario Čagalj
This paper reports two attacks on Undercover, a human authentication scheme against passive observers proposed at CHI 2008. The first attack exploits nonuniform human behavior in responding to authentication challenges and the second one is based on information leaked from authentication challenges or responses visible to the attacker. The second attack can be generalized to break two alternative Undercover designs presented at Pervasive 2009. All the attacks exploit design flaws of the Undercover implementations.
   Theoretical and experimental analyses show that both attacks can reveal the user's password with high probability with O(10) observed login sessions. Both attacks were verified by using the login data collected in a user study with 28 participants. We also propose some enhancements to make Undercover secure against the attacks reported in this paper.
   Our research in breaking and improving Undercover leads to two broader implications. First, it reemphasizes the principle that "the devil is in the details" for the design of security-related human-computer interfaces. Second, it reveals a subtle relationship between security and usability: human users may behave in insecure ways that compromise the security of a system. To design a secure human-computer interface, designers should pay special attention to the possible negative influence of any detail of the interface, including how human users interact with the system.
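The flavor of such an observation attack can be illustrated with a toy candidate-pruning model (an illustrative sketch only, not the paper's actual attack on Undercover): every challenge/response pair visible to the attacker eliminates password candidates inconsistent with it, so a handful of observed sessions can collapse the candidate set to a single password.

```python
import itertools

# Illustrative model only: a toy scheme in which the password is a pair of
# digits and each login leaks (pw[c % 2] + c) % 10 for a visible challenge c.
# The attacker keeps every candidate consistent with all observed sessions.

def consistent(pw, challenge, response):
    return (pw[challenge % 2] + challenge) % 10 == response

def prune(candidates, sessions):
    for challenge, response in sessions:
        candidates = [p for p in candidates if consistent(p, challenge, response)]
    return candidates

secret = (3, 7)
# Three observed sessions (challenge, leaked response) -- O(10) in spirit.
sessions = [(c, (secret[c % 2] + c) % 10) for c in range(3)]
survivors = prune(list(itertools.product(range(10), repeat=2)), sessions)
print(survivors)  # the candidate set collapses to the secret: [(3, 7)]
```

In this toy version three sessions suffice because each challenge leaks a different password position; the paper's attacks exploit analogous, if subtler, leakage.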
Shoulder surfing defence for recall-based graphical passwords (p. 6)
  Nur Haryani Zakaria; David Griffiths; Sacha Brostoff; Jeff Yan
Graphical passwords are often considered prone to shoulder-surfing attacks, where attackers can steal a user's password by peeking over his or her shoulder in the authentication process. In this paper, we explore shoulder surfing defence for recall-based graphical password systems such as Draw-A-Secret and Background Draw-A-Secret, where users doodle their passwords (i.e. secrets) on a drawing grid. We propose three innovative shoulder surfing defence techniques, and conduct two separate controlled laboratory experiments to evaluate both security and usability perspectives of the proposed techniques. One technique was expected to work to some extent theoretically, but it turned out to provide little protection. One technique provided the best overall shoulder surfing defence, but also caused some usability challenges. The other technique achieved reasonable shoulder surfing defence and good usability simultaneously, a good balance which the two other techniques did not achieve. Our results appear to be also relevant to other graphical password systems such as Pass-Go.

SOUPS du jour

Heuristics for evaluating IT security management tools (p. 7)
  Pooya Jaferian; Kirstie Hawkey; Andreas Sotirakopoulos; Maria Velez-Rojas; Konstantin Beznosov
The usability of IT security management (ITSM) tools is hard to evaluate by regular methods, making heuristic evaluation attractive. However, standard usability heuristics are hard to apply, as IT security management occurs within a complex and collaborative context that involves diverse stakeholders. We propose a set of ITSM usability heuristics that are based on activity theory, are supported by prior research, and consider the complex and cooperative nature of security management. In a between-subjects study, we compared the use of the ITSM heuristics and Nielsen's heuristics for evaluating a commercial identity management system. Participants who used the ITSM set found more problems categorized as severe than those who used Nielsen's. As evaluators identified different types of problems with the two sets of heuristics, we recommend employing both the ITSM heuristics and Nielsen's heuristics during evaluation of ITSM tools.
Smartening the crowds: computational techniques for improving human verification to fight phishing scams (p. 8)
  Gang Liu; Guang Xiang; Bryan A. Pendleton; Jason I. Hong; Wenyin Liu
Phishing is an ongoing kind of semantic attack that tricks victims into inadvertently sharing sensitive information. In this paper, we explore novel techniques for combating the phishing problem using computational techniques to improve human effort. Using tasks posted to the Amazon Mechanical Turk human effort market, we measure the accuracy of minimally trained humans in identifying potential phish, and consider methods for best taking advantage of individual contributions. Furthermore, we present our experiments using clustering techniques and vote weighting to improve the results of human effort in fighting phishing. We found that these techniques could increase coverage over, and were significantly faster than, the blacklists in use today.
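As a rough illustration of the vote-weighting idea (the worker weights and the threshold below are hypothetical; the paper's scheme additionally clusters similar reports), a URL can be flagged when the accuracy-weighted fraction of "phish" votes crosses a threshold:

```python
# Hypothetical sketch of weighted majority voting over crowd labels.
# Each worker carries a weight derived from past accuracy (assumed values);
# a URL is flagged as phish when the weighted vote mass crosses a threshold.

def weighted_vote(votes, weights, threshold=0.5):
    """votes: {worker_id: True if labeled phish}; weights: {worker_id: float}."""
    total = sum(weights[w] for w in votes)
    if total == 0:
        return False
    phish_mass = sum(weights[w] for w, v in votes.items() if v)
    return phish_mass / total > threshold

votes = {"w1": True, "w2": True, "w3": False}
weights = {"w1": 0.9, "w2": 0.4, "w3": 0.6}  # assumed past-accuracy weights
print(weighted_vote(votes, weights))  # -> True (1.3 / 1.9 > 0.5)
```

Weighting by past accuracy lets a small number of reliable workers outvote a larger number of careless ones, which is one way crowd verification can beat an unweighted majority.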
Reciprocity attacks (p. 9)
  Feng Zhu; Sandra Carpenter; Ajinkya Kulkarni; Swapna Kolimi
In mobile and pervasive computing environments, users may easily exchange information via ubiquitously available computers ranging from sensors, embedded processors, wearable and handheld devices, to servers. The unprecedented level of interaction between users and intelligent environments poses unparalleled privacy challenges. We identify a new attack that can be used to acquire users' private information -- using reciprocity norms. By mutually exchanging information with users, an attacker may use a psychological method, the norm of reciprocity, to acquire users' private information. We implemented software to provide a rich shopping experience in a mobile and pervasive computing environment and embedded the reciprocity attack. Our experiments showed that participants were more willing to provide some types of private information under reciprocity attacks. To the best of our knowledge, this is the first attempt to understand the impact of the norm of reciprocity as an attack in mobile and pervasive computing environments. These human factors should be taken into consideration when designing security measures to protect people's privacy.

Privacy on social network sites

"I regretted the minute I pressed share": a qualitative study of regrets on Facebook BIBAFull-Text 10
  Yang Wang; Gregory Norcie; Saranga Komanduri; Alessandro Acquisti; Pedro Giovanni Leon; Lorrie Faith Cranor
We investigate regrets associated with users' posts on a popular social networking site. Our findings are based on a series of interviews, user diaries, and online surveys involving 569 American Facebook users. Their regrets revolved around sensitive topics, content with strong sentiment, lies, and secrets. Our research reveals several possible causes of why users make posts that they later regret: (1) they want to be perceived in favorable ways, (2) they do not think about their reason for posting or the consequences of their posts, (3) they misjudge the culture and norms within their social circles, (4) they are in a "hot" state of high emotion when posting, or under the influence of drugs or alcohol, (5) their postings are seen by an unintended audience, (6) they do not foresee how their posts could be perceived by people within their intended audience, and (7) they misunderstand or misuse the Facebook platform. Some reported incidents had serious repercussions, such as breaking up relationships or job losses. We discuss methodological considerations in studying negative experiences associated with social networking posts, as well as ways of helping users of social networking sites avoid such regrets.
ROAuth: recommendation based open authorization (p. 11)
  Mohamed Shehab; Said Marouf; Christopher Hudel
Many major online platforms, such as Facebook, Google, and Twitter, provide an open Application Programming Interface which allows third party applications to access user resources. The Open Authorization protocol (OAuth) was introduced as a secure and efficient method for authorizing third party applications without releasing a user's access credentials. However, OAuth implementations do not provide the necessary fine-grained access control, nor any recommendations about which access control decisions are most appropriate. We propose an extension to the OAuth 2.0 authorization flow that enables the provisioning of fine-grained authorization recommendations to users when granting permissions to third party applications. We propose a mechanism that computes permission ratings based on a multi-criteria recommendation model which utilizes previous user decisions and application requests to enhance the privacy of the overall site's user population. We implemented our proposed OAuth extension as a browser extension that allows users to easily configure their privacy settings at application installation time, provides recommendations on requested privacy attributes, and collects data regarding user decisions. Experiments on the collected data indicate that the proposed framework effectively enhances user awareness and privacy related to third party application authorizations.
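A multi-criteria permission rating of this kind could be sketched as a weighted combination of signals (the criteria, weights, and function name here are assumptions for illustration, not the paper's actual model):

```python
# Hypothetical sketch of a multi-criteria recommendation score for one
# requested permission. Criteria and weights are illustrative assumptions.

def permission_rating(pop_deny_rate, user_denied_similar, app_request_rate,
                      w=(0.5, 0.3, 0.2)):
    """Return a score in [0, 1]; higher means 'recommend denying'.

    pop_deny_rate: fraction of the site's users who denied this permission.
    user_denied_similar: fraction of similar permissions this user denied.
    app_request_rate: how unusually often the app requests this permission.
    """
    return (w[0] * pop_deny_rate
            + w[1] * user_denied_similar
            + w[2] * app_request_rate)

score = permission_rating(0.8, 0.5, 0.9)  # 0.73: lean toward denying
recommend_deny = score > 0.5
```

Blending population-wide decisions with the user's own history is what lets such a recommender personalize defaults while still nudging the overall user population toward more private settings.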
Privacy: is there an app for that? (p. 12)
  Jennifer King; Airi Lampinen; Alex Smolen
Users of social networking sites (SNSs) increasingly must learn to negotiate privacy online with multiple service providers. Facebook's third-party applications (apps) add an additional layer of complexity and confusion for users seeking to understand and manage their privacy. We conducted a novel exploratory survey (administered on Facebook as a Platform app) to measure how Facebook app users interact with apps, what they understand about how apps access and exchange their profile information, and how these factors relate to their privacy concerns. In our analysis, we paid special attention to our most knowledgeable respondents: given their expertise, would they differ in behaviors or attitudes from less knowledgeable respondents? We found that misunderstandings and confusion abound about how apps function and how they manage profile data. Against our expectations, neither knowledge nor behavior was a consistent predictor of privacy concerns with third-party apps or on SNSs in general. Instead, whether or not the respondent had experienced an adverse privacy event on a social networking site was a reliable predictor of privacy attitudes.

Perceptions of privacy and security

Home is safer than the cloud!: privacy concerns for consumer cloud storage (p. 13)
  Iulia Ion; Niharika Sachdeva; Ponnurangam Kumaraguru; Srdjan Čapkun
Several studies have ranked security and privacy as major areas of concern and impediments to cloud adoption for companies, but none have looked into end-users' attitudes and practices. Little is known about consumers' privacy beliefs and expectations for cloud storage, such as web-mail, document and photo sharing platforms, or about users' awareness of contractual terms and conditions. We conducted 36 in-depth interviews in Switzerland and India (two countries with different privacy perceptions and expectations), and followed up with an online survey with 402 participants in both countries. We study users' privacy attitudes and beliefs regarding their use of cloud storage systems. Our results show that privacy requirements for consumer cloud storage differ from those of companies. Users are less concerned about some issues, such as guaranteed deletion of data, country of storage, and storage outsourcing, but are uncertain about using cloud storage. Our results further show that end-users consider the Internet intrinsically insecure and prefer local storage for sensitive data over cloud storage. However, users desire better security and are ready to pay for services that provide strong privacy guarantees. Participants had misconceptions about the rights and guarantees their cloud storage providers offer. For example, users believed that their provider is liable in case of data loss, does not have the right to view and modify user data, and cannot disable user accounts. Finally, our results show that cultural differences greatly influence user attitudes and beliefs, such as their willingness to store sensitive data in the cloud and their acceptance that law enforcement agencies monitor user accounts. We believe that these observations can help in improving users' privacy in cloud storage systems.
Eyeing your exposure: quantifying and controlling information sharing for improved privacy (p. 14)
  Roman Schlegel; Apu Kapadia; Adam J. Lee
A large body of research has focused on disclosure policies for controlling information release in social sharing (e.g., location-based) applications. However, less work has considered how exposed these policies actually leave users; i.e., to what extent are disclosures in compliance with these policies actually being made? For instance, consider a disclosure policy granting Alice's coworkers access to her location during work hours. Alice might feel that this policy appropriately controls her exposure, but may feel differently if she learned that her boss was accessing her location every 5 minutes. In addition to specifying who has access to personal information, users need a way to quantify, interpret, and control the extent to which this data is shared.
   We propose and evaluate an intuitive mechanism for summarizing and controlling a user's exposure on smartphone-based platforms. Our approach uses the visual metaphor of eyes appearing and growing in size on the home screen; the rate at which these eyes grow depends on the number of accesses granted for a user's location and the type of person (e.g., family vs. friend) making these accesses. This approach gives users an accurate and ambient sense of their exposure and helps them take actions to limit their exposure, all without explicitly identifying the social contacts making requests. Through two systematic user studies (N = 43 and N = 41) we show that our interface is indeed effective at summarizing complex exposure information and provides comparable information to a more cumbersome interface presenting more detailed information.
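An exposure metric of this shape could be sketched as follows (the relation weights, growth rate, and size cap are assumptions for illustration; the study's actual growth function may differ):

```python
# Sketch of a "growing eyes" exposure summary. Weights, growth rate, and
# relation categories are illustrative assumptions, not the study's values.

RELATION_WEIGHT = {"family": 0.5, "friend": 1.0, "coworker": 2.0}  # assumed

def eye_size(accesses, base=10, growth=2, max_size=64):
    """accesses: list of (relation, count) pairs; returns an icon size in px.

    More accesses, and accesses by more 'distant' relations, grow the eye
    faster; the size is capped so the icon stays ambient rather than alarming.
    """
    score = sum(RELATION_WEIGHT.get(rel, 1.0) * n for rel, n in accesses)
    return min(max_size, base + growth * score)

print(eye_size([("coworker", 12), ("friend", 3)]))  # grows until capped at 64
```

Mapping exposure to a single visual dimension (size) is what keeps the display glanceable: the user sees how much they are being watched without being told who is watching.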
Indirect content privacy surveys: measuring privacy without asking about it (p. 15)
  Alex Braunstein; Laura Granka; Jessica Staddon
The strong emotional reaction elicited by privacy issues is well documented (e.g., [12, 8]). The emotional aspect of privacy makes it difficult to evaluate privacy concern, and directly asking about a privacy issue may result in an emotional reaction and a biased response. This effect may be partly responsible for the dramatic privacy concern ratings coming from recent surveys, ratings that often seem to be at odds with user behavior. In this paper we propose indirect techniques for measuring content privacy concerns through surveys, in the hope of diminishing any emotional response. We present a design for indirect surveys and test the design's use as (1) a means to measure relative privacy concerns across content types, (2) a tool for predicting unwillingness to share content (a possible indicator of privacy concern), and (3) a gauge for two underlying dimensions of privacy: content importance and the willingness to share content. Our evaluation consists of three surveys, taken by 200 users each, in which privacy is never asked about directly, but privacy warnings are issued with increasing escalation in the instructions and individual question wording. We demonstrate that this escalation results in statistically and practically significant differences in responses to individual questions. In addition, we compare results against a direct privacy survey and show that rankings of privacy concerns are increasingly preserved as privacy language increases in the indirect surveys, indicating that our mapping of the indirect questions to privacy ratings accurately reflects privacy concerns.