SOUPS Tables of Contents: 05 06 07 08 09 10 11 12 13 14 15

Proceedings of the 2005 Symposium on Usable Privacy and Security

Fullname: Symposium on Usable Privacy and Security
Editors: Lorrie Faith Cranor; Mary Ellen Zurko
Location: Pittsburgh, Pennsylvania
Dates: 2005-Jul-06 to 2005-Jul-08
Standard No: ISBN 1-59593-178-3; ACM DL: Table of Contents; hcibib: SOUPS05
Links: Conference Home Page
  1. Usable Security
  2. Usable Privacy
  3. Visualizing Security

Usable Security

Authentication using graphical passwords: effects of tolerance and image choice, pp. 1-12
  Susan Wiedenbeck; Jim Waters; Jean-Camille Birget; Alex Brodskiy; Nasir Memon
Graphical passwords are an alternative to alphanumeric passwords in which users click on images to authenticate themselves rather than type alphanumeric strings. We have developed one such system, called PassPoints, and evaluated it with human users. The results of the evaluation were promising with respect to memorability of the graphical password. In this study we expand our human factors testing by studying two issues: the effect of tolerance, or margin of error, in clicking on the password points and the effect of the image used in the password system. In our tolerance study, results show that accurate memory for the password is strongly reduced when using a small tolerance (10 x 10 pixels) around the user's password points. This may occur because users fail to encode the password points in memory in the precise manner that is necessary to remember the password over a lapse of time. In our image study we compared user performance on four everyday images. The results indicate that there were few significant differences in performance among the images. This preliminary result suggests that many images may support memorability in graphical password systems.
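The tolerance mechanism studied above reduces to a simple matching rule: each click must land within a small box centred on the stored password point. A minimal sketch of that rule (not the PassPoints implementation; function names and the ordered-click assumption are illustrative):

```python
def within_tolerance(click, target, tol=10):
    """True if a click lands inside the tol x tol box centred on the
    stored password point (the 'margin of error' studied above)."""
    cx, cy = click
    tx, ty = target
    return abs(cx - tx) <= tol // 2 and abs(cy - ty) <= tol // 2

def check_password(clicks, password_points, tol=10):
    """A graphical password matches when every click, in order, falls
    within tolerance of the corresponding stored point."""
    return len(clicks) == len(password_points) and all(
        within_tolerance(c, p, tol) for c, p in zip(clicks, password_points)
    )
```

Under this rule, shrinking `tol` from 20 to 10 pixels directly tightens the precision with which users must remember each point, which is the manipulation the study found strongly reduces password memorability.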
Johnny 2: a user test of key continuity management with S/MIME and Outlook Express, pp. 13-24
  Simson L. Garfinkel; Robert C. Miller
Secure email has struggled with significant obstacles to adoption, among them the low usability of encryption software and the cost and overhead of obtaining public key certificates. Key continuity management (KCM) has been proposed as a way to lower these barriers to adoption, by making key generation, key management, and message signing essentially automatic. We present the first user study of KCM-secured email, conducted on naive users who had no previous experience with secure email. Our secure email prototype, CoPilot, color-codes messages depending on whether they were signed and whether the signer was previously known or unknown. This interface makes users significantly less susceptible to social engineering attacks overall, but new-identity attacks (from email addresses never seen before) are still effective. Also, naive users do use the Sign and Encrypt button on the Outlook Express toolbar when the situation seems to warrant it, even without explicit instruction, although some falsely hoped that Encrypt would protect a secret message even when sent directly to an attacker. We conclude that KCM is a workable model for improving email security today, but work is needed to alert users to "phishing" attacks.
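The key continuity model described above reduces to a small decision rule: trust follows from whether a sender's key matches the one seen before, not from a certificate authority. A minimal sketch of that rule (the function name and status labels are illustrative, not CoPilot's actual design):

```python
def kcm_status(sender, signing_key, seen_keys):
    """Classify a message under key continuity management (KCM).
    seen_keys maps sender addresses to the first signing key observed.
    Labels here are illustrative stand-ins for a UI's color coding."""
    if signing_key is None:
        return "unsigned"          # no signature at all
    if sender not in seen_keys:
        seen_keys[sender] = signing_key
        return "new identity"      # first contact: remember this key
    if seen_keys[sender] == signing_key:
        return "continuity holds"  # same key as every prior message
    return "key changed"           # mismatch: possible spoofing
```

Note that the "new identity" case is exactly where the study found users remained vulnerable: a never-before-seen address carries no history for continuity checking to exploit.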
Two experiences designing for effective security, pp. 25-34
  Rogério de Paula; Xianghua Ding; Paul Dourish; Kari Nies; Ben Pillet; David Redmiles; Jie Ren; Jennifer Rode; Roberto Silva Filho
In our research, we have been concerned with how to make relevant features of security situations visible to users, so that they can make informed decisions about potential privacy and security problems and about the implications of their actions. To this end, we have designed technical infrastructures that make visible the configurations, activities, and implications of available security mechanisms, allowing users to make informed choices and to take coordinated and appropriate action when necessary. This work differs from more traditional security usability work in that our focus is not only on the usability of security mechanisms (e.g., the ease of use of an access control interface), but on how security manifests itself as part of people's interactions with and through information systems (i.e., how people experience and interpret privacy and security situations, and are enabled or constrained by existing technological mechanisms to act appropriately). In this paper, we report our experiences designing, developing, and testing two technical infrastructures that support this approach to usable security.

Usable Privacy

Usable security and privacy: a case study of developing privacy management tools, pp. 35-43
  Carolyn Brodie; Clare-Marie Karat; John Karat; Jinjuan Feng
Privacy is a concept that received relatively little attention during the rapid growth and spread of information technology through the 1980s and 1990s. Designing to make information easily accessible, without particular attention to issues such as whether an individual had a desire or right to control access to and use of particular information, was seen as the more pressing goal. We believe that there will be an increasing awareness of a fundamental need to address privacy concerns in information technology, and that doing so will require an understanding of the policies that govern information use as well as the development of technologies that can implement such policies. The research reported here describes our efforts to design a privacy management workbench that facilitates privacy policy authoring, implementation, and compliance monitoring. This case study highlights the work of identifying organizational privacy requirements, analyzing existing technology, conducting on-going research to identify approaches that address these requirements, and iteratively designing and validating a prototype with target users for flexible privacy technologies.
Stopping spyware at the gate: a user study of privacy, notice and spyware, pp. 43-52
  Nathaniel Good; Rachna Dhamija; Jens Grossklags; David Thaw; Steven Aronowitz; Deirdre Mulligan; Joseph Konstan
Spyware is a significant problem for most computer users. The term "spyware" loosely describes a new class of computer software. This type of software may track user activities online and offline, provide targeted advertising and/or engage in other types of activities that users describe as invasive or undesirable.
   While the magnitude of the spyware problem is well documented, recent studies have had only limited success in explaining the broad range of user behaviors that contribute to the proliferation of spyware. As opposed to viruses and other malicious code, users themselves often have a choice whether they want to install these programs.
   In this paper, we discuss an ecological study of users installing five real-world applications. In particular, we seek to understand the influence of the form and content of notices (e.g., EULAs) on users' installation decisions.
   Our study indicates that while notice is important, notice alone may not be enough to affect users' decisions to install an application. We found that users have limited understanding of EULA content and little desire to read lengthy notices. Users found short, concise notices more useful and noticed them more often, yet even these did not have a significant effect on installation for our population. When users were informed of the actual contents of the EULAs to which they had agreed, they often regretted their installation decisions.
   We discovered that regardless of the bundled content, users will often install an application if they believe the utility is high enough. However, we discovered that privacy and security become important factors when choosing between two applications with similar functionality. Given two similar programs (e.g. KaZaA and Edonkey), consumers will choose the one they believe to be less invasive and more stable. We also found that providing vague information in EULAs and short notices can create an unwarranted impression of increased security. In these cases, it may be helpful to have a standardized format for assessing the possible options and trade-offs between applications.
Making PRIME usable, pp. 53-64
  John Sören Pettersson; Simone Fischer-Hübner; Ninni Danielsson; Jenny Nilsson; Mike Bergmann; Sebastian Clauss; Thomas Kriegelstein; Henry Krasemann
Privacy-enhanced Identity Management can enable users to retain and maintain informational self-determination in our networked society. This paper describes the usability research carried out during the first year of the European Union project "Privacy and Identity Management for Europe" (PRIME). It primarily discusses and compares three alternative UI paradigms for privacy-enhanced Identity Management, and presents how important legal privacy principles derived from the European Union Directives have been mapped into suggested user interface solutions for PRIME. It also discusses the results of, and problems encountered in, usability tests of mock-ups implementing the different UI paradigms, and proposes means for addressing those problems. The paper concludes with remarks on the characteristics of usability work for privacy-enhancing technologies.
Developing privacy guidelines for social location disclosure applications and services, pp. 65-76
  Giovanni Iachello; Ian Smith; Sunny Consolvo; Mike Chen; Gregory D. Abowd
In this article, we describe the design process of Reno, a location-enhanced, mobile coordination tool and person finder. The design process included three field experiments: a formative Experience Sampling Method (ESM) study, a pilot deployment and an extended user study. These studies were targeted at the significant personal security, privacy and data protection concerns caused by this application. We distill this experience into a small set of guidelines for designers of social mobile applications and show how these guidelines can be applied to a different application, called Boise. These guidelines cover issues pertaining to personal boundary definition, control, deception and denial, and group vs. individual communication. We also report on lessons learned from our evaluation experience, which might help practitioners in designing novel mobile applications, including the choice and characterization of users for testing security and privacy features of designs, the length of learning curves and their effect on evaluation and the impact of peculiar deployment circumstances on the results of these finely tuned user studies.

Visualizing Security

The battle against phishing: Dynamic Security Skins, pp. 77-88
  Rachna Dhamija; J. D. Tygar
Phishing is a model problem for illustrating usability concerns of privacy and security because both system designers and attackers battle using user interfaces to guide (or misguide) users.
   We propose a new scheme, Dynamic Security Skins, that allows a remote web server to prove its identity in a way that is easy for a human user to verify and hard for an attacker to spoof. We describe the design of an extension to the Mozilla Firefox browser that implements this scheme.
   We present two novel interaction techniques to prevent spoofing. First, our browser extension provides a trusted window in the browser dedicated to username and password entry. We use a photographic image to create a trusted path between the user and this window to prevent spoofing of the window and of the text entry fields.
   Second, our scheme allows the remote server to generate a unique abstract image for each user and each transaction. This image creates a "skin" that automatically customizes the browser window or the user interface elements in the content of a remote web page. Our extension allows the user's browser to independently compute the image that it expects to receive from the server. To authenticate content from the server, the user can visually verify that the images match.
   We contrast our work with existing anti-phishing proposals. In contrast to other proposals, our scheme places a very low burden on the user in terms of effort, memory and time. To authenticate himself, the user has to recognize only one image and remember one low entropy password, no matter how many servers he wishes to interact with. To authenticate content from an authenticated server, the user only needs to perform one visual matching operation to compare two images. Furthermore, it places a high burden of effort on an attacker to spoof customized security indicators.
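The core of the scheme sketched in this abstract is that browser and server independently derive the same per-transaction image from shared state, so a spoofed server cannot reproduce it. The following is an illustrative stand-in for that step, assuming a shared per-user secret and using an HMAC-derived glyph grid in place of the paper's abstract images (all names hypothetical; this is not the actual Dynamic Security Skins algorithm):

```python
import hashlib
import hmac

def skin_pattern(shared_secret: bytes, transaction_id: bytes, size: int = 8):
    """Derive a deterministic visual 'skin' from a per-user secret and a
    per-transaction nonce. Browser and server compute this independently;
    the user authenticates content by checking that the two patterns match."""
    digest = hmac.new(shared_secret, transaction_id, hashlib.sha256).digest()
    # Map successive 2-bit chunks of the digest to a small grid of glyphs
    # that a human can compare visually.
    glyphs = " .:#"
    bits = int.from_bytes(digest, "big")
    grid = []
    for _ in range(size):
        row = []
        for _ in range(size):
            bits, idx = divmod(bits, len(glyphs))
            row.append(glyphs[idx])
        grid.append("".join(row))
    return grid

# Both endpoints derive the same pattern from the same inputs; a phisher
# without the shared secret cannot reproduce it for a given transaction.
server_view = skin_pattern(b"per-user secret", b"transaction-42")
browser_view = skin_pattern(b"per-user secret", b"transaction-42")
```

The low user burden claimed above falls out of this structure: the user memorizes only the one low-entropy password behind the shared secret, and each verification is a single visual comparison of two patterns.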
Attacking information visualization system usability: overloading and deceiving the human, pp. 89-100
  Gregory Conti; Mustaque Ahamad; John Stasko
Information visualization is an effective way to comprehend large amounts of data easily. For such systems to be truly effective, information visualization designers must be aware of the ways in which their systems may be manipulated, and must protect their users from attack. In addition, users should be aware of potential attacks in order to minimize or negate their effect. These attacks target the information visualization system as well as the perceptual, cognitive, and motor capabilities of human end users. To identify and help counter these attacks, we present a framework for information visualization system security analysis, a taxonomy of visualization attacks, and technology-independent principles for countering malicious visualizations. These themes are illustrated with case studies and working examples from the network security visualization domain, but are widely applicable to virtually any information visualization system.
Social navigation as a model for usable security, pp. 101-108
  Paul DiGioia; Paul Dourish
As interest in usable security spreads, the use of visual approaches in which the functioning of a distributed system is made visually available to end users is an approach that a number of researchers have examined. In this paper, we discuss the use of the social navigation paradigm as a way of organizing visual displays of system action. Drawing on a previous study of security in the Kazaa peer to peer system, we present some examples of the ways in which social navigation can be incorporated in support of usable security.