
Journal of Usability Studies 1

Editors: Avi Parush
Dates: 2005/2006
Volume: 1
Publisher: Usability Professionals' Association
Standard No: ISSN 1931-3357
Papers: 18
Links: Journal Home Page | Table of Contents
  1. JUS 2005 Volume 1 Issue 1
  2. JUS 2006 Volume 1 Issue 2
  3. JUS 2006 Volume 1 Issue 3
  4. JUS 2006 Volume 1 Issue 4

JUS 2005 Volume 1 Issue 1

Introduction BIBHTML i
  Avi Parush
Usability Testing of Mobile Applications: A Comparison between Laboratory and Field Testing BIBAHTMLPDF 4-17
  Anne Kaikkonen; Aki Kekäläinen; Mihael Cankar; Titti Kallio; Anu Kankainen
Usability testing a mobile application in the laboratory seems to be sufficient when studying user interface and navigation issues. The usability of a consumer application was tested in two environments, a laboratory and the field, with a total of 40 test users. The same problems were found in both environments, but the frequency of findings differed between the contexts. The results indicate that conducting a time-consuming field test may not be worthwhile when searching for user interface flaws to improve user interaction. Field testing may nevertheless be worthwhile when usability tests are combined with a field pilot or contextual study in which user behavior is investigated in its natural context.
Iterative Usability Testing as Continuous Feedback: A Control Systems Perspective BIBAHTMLPDF 18-27
  Alex Genov
This paper argues that in the field of usability, debates about number of users, the use of statistics, etc. in the abstract are pointless and even counter-productive. We propose that the answers depend on the research questions and business objectives of each project and thus cannot be discussed in absolute terms. Sometimes usability testing is done with an implicit or explicit hypothesis in mind. At other times the purpose of testing is to guide iterative design. These two approaches call for different study designs and treatment of data. We apply control systems theory to the topic of usability to highlight and frame the value of iterative usability testing in the design lifecycle. Within this new metaphor, iterative testing is a form of feedback which is most effective and resource-efficient if done as often as practically possible with project resources and timelines in mind.
Towards the Design of Effective Formative Test Reports BIBAHTMLPDF 28-46
  Mary Theofanos; Whitney Quesenbery
Many usability practitioners conduct most of their usability evaluations to improve a product during its design and development. We call these "formative" evaluations to distinguish them from "summative" (validation) usability tests at the end of development. A standard for reporting summative usability test results has been adopted by international standards organizations. But that standard is not intended for the broader range of techniques and business contexts in formative work. This paper reports on a new industry project to identify best practices in reports of formative usability evaluations. The initial work focused on gathering examples of reports used in a variety of business contexts. We define elements in these reports and present some early guidelines on making design decisions for a formative report. These guidelines are based on considerations of the business context, the relationship between author and audience, the questions that the evaluation is trying to answer, and the techniques used in the evaluation. Future work will continue to investigate industry practice and conduct evaluations of proposed guidelines or templates.
Usability Testing of Travel Websites BIBAHTMLPDF 47-61
  Deborah S. Carstens; Pauline Patterson
A usability study was conducted to identify usability problems in, and recommendations for improving, three travel sales websites. Twenty participants between the ages of 19 and 65, recruited from the university campus and consisting of students, faculty, and staff, took part. The three websites tested were Expedia.com, Orbitz.com, and Travelocity.com.
   Each participant was given general instructions and a pre-survey to determine demographics and level of Internet experience. The study tested participants on the task of finding the same itinerary on each travel website. During testing, each participant was observed by the experimenter, who maintained an observation log. A post-survey and a debriefing session were conducted to gather additional feedback. The average testing time per participant was 30 minutes. The results of this study are presented, along with a discussion of future research on developing usability guidelines for designers of travel websites.

JUS 2006 Volume 1 Issue 2

Introduction BIBHTML i
  Avi Parush
Do Usability Expert Evaluation and Testing Provide Novel and Useful Data for Game Development? BIBHTMLPDF 64-75
  Sauli Laitinen
A Pattern Language Approach to Usability Knowledge Management BIBAHTMLPDF 76-90
  Michael Hughes
Knowledge gained from usability testing is often applied merely to the immediate product under test and then forgotten -- at least at an organizational level. This article describes a usability knowledge management system (KMS) based on principles of pattern language and use-case writing that offers a way to turn lessons learned from usability testing into organizational knowledge that can be leveraged across different projects and different design teams.
Empirical Evaluation of a Popular Cellular Phone's Menu System: Theory Meets Practice BIBAHTMLPDF 91-108
  Sheng-Cheng Huang; I-Fan Chou; Randolph G. Bias
A usability assessment using a paper prototype was conducted to examine menu selection theories on a small-screen device by determining the effectiveness, efficiency, and user satisfaction of a popular cellular phone's menu system. The outcomes of this study suggest that users prefer a less extensive menu structure on a small-screen device. The investigation also covered how category classification and item labeling influence user performance in menu selection. The findings suggest that proper modifications in these areas could significantly enhance the system's usability, and they demonstrate the validity of paper prototyping, which is capable of detecting significant differences in usability measures among various designs.

JUS 2006 Volume 1 Issue 3

Introduction BIBHTML i
  Avi Parush
Can Collaboration Help Redefine Usability? BIBPDF 109-111
  Charles B. Kreitzberg
Using Eye Tracking to Compare Web Page Designs: A Case Study BIBAHTMLPDF 112-120
  Agnieszka Bojko
A proposed design for the American Society of Clinical Oncology (ASCO) Web site was evaluated against the original design in terms of the ease with which the right starting points for key tasks were located and processed. This report focuses on the eye tracking methodology that accompanied the other conventional usability practices used in the evaluation. Twelve ASCO members were asked to complete several search tasks using each design. Performance measures such as click accuracy and time on task were supplemented with eye movements, which allowed for an assessment of the processes that led to both the failures and the successes. The report details three task examples in which eye tracking helped diagnose errors and identify the better of the two designs (and the reasons for its superiority) even when both were equally successful. Advantages and limitations of applying eye tracking to design comparison are also discussed.
Case Study: Conducting Large-Scale Multi-User User Tests on the United Kingdom Air Defence Command and Control System BIBAHTMLPDF 121-135
  Elliott Hey
IBM was contracted to provide a new Air Defence Command and Control (ADCC) system for the Royal Air Force. The IBM Human Factors (HF) team was responsible for the design of the operations room, workstations and the graphical user interfaces. Because the project was safety-related, IBM had to produce a safety case. One aspect of the safety case was a demonstration of the operational effectiveness of the new system.
   This paper is an in-depth case study of the user testing that was carried out to demonstrate the effectiveness of the system. Due to time constraints the HF team had to observe five participants working simultaneously. Further, to provide a realistic operational environment, up to twenty-eight operators were required for each test. The total effort for this activity was four person-years. The paper will detail the considerations, challenges and lessons learned in the creation and execution of these multi-user user tests.
When 100% Really Isn't 100%: Improving the Accuracy of Small-Sample Estimates of Completion Rates BIBAHTMLPDF 136-150
  James R. Lewis; Jeff Sauro
Small sample sizes are a fact of life for most usability practitioners. This can lead to serious measurement problems, especially when making binary measurements such as successful task completion rates (p). The computation of confidence intervals helps by establishing the likely boundaries of measurement, but there is still a question of how to compute the best point estimate, especially for extreme outcomes. In this paper, we report the results of investigations of the accuracy of different estimation methods for two hypothetical distributions and one empirical distribution of p. If a practitioner has no expectation about the value of p, then the Laplace method ((x+1)/(n+2)) is the best estimator. If practitioners are reasonably sure that p will range between .5 and 1.0, then they should use the Wilson method if the observed value of p is less than .5, Laplace when p is greater than .9, and maximum likelihood (x/n) otherwise.
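The abstract's decision rule can be sketched in code. This is an illustrative reading of the abstract only, not the paper's implementation; the Wilson point estimate is rendered here in its common z²-adjusted form with z = 1.96, and the function names are invented for the example:

```python
def mle(x, n):
    """Maximum likelihood estimate: x successes out of n trials."""
    return x / n

def laplace(x, n):
    """Laplace estimate: add one success and one failure."""
    return (x + 1) / (n + 2)

def wilson(x, n, z=1.96):
    """Wilson point estimate: midpoint of the Wilson score interval."""
    return (x + z * z / 2) / (n + z * z)

def completion_rate_estimate(x, n):
    """Point estimate of a task completion rate, following the abstract's
    rule for practitioners who expect p to fall between .5 and 1.0."""
    p = mle(x, n)
    if p < 0.5:
        return wilson(x, n)   # pull extreme low observations toward .5
    if p > 0.9:
        return laplace(x, n)  # temper extreme high observations
    return p                  # otherwise use the raw proportion
```

For example, 5 successes out of 5 trials yields a Laplace estimate of 6/7 (about .86) rather than the raw 100%, which is the kind of correction the paper's title refers to.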

JUS 2006 Volume 1 Issue 4

World Usability Day: A Challenge for Everyone BIBPDF 151-155
  Elizabeth Rosenzweig
Culture and Usability Evaluation: The Effects of Culture in Structured Interviews BIBAHTMLPDF 156-170
  Ravi Vatrapu; Manuel A. Pérez-Quiñones
A major impediment in global user interface development is that there is inadequate empirical evidence for the effects of culture in the usability engineering methods used for developing these global user interfaces. This paper presents a controlled study investigating the effects of culture on the effectiveness of structured interviews in international usability evaluation. The experiment consisted of a usability evaluation of a website with two independent groups of Indian participants. Each group had a different interviewer; one belonging to the Indian culture and the other to the Anglo-American culture. The results show that participants found more usability problems and made more suggestions to an interviewer who was a member of the same (Indian) culture than to the foreign (Anglo-American) interviewer. The results of the study empirically establish that culture significantly affects the efficacy of structured interviews during international user testing. The implications of this work for usability engineering are discussed.
Animated Character Likeability Revisited: The Case of Interactive TV BIBAHTMLPDF 171-184
  Konstantinos Chorianopoulos
Animated characters have been a popular research theme, but the respective desktop applications have not been well-received by end-users. The objective of this study was to evaluate the use of an animated character for presenting information and navigating music videos within an interactive television (ITV) application. Information was displayed over music video clips with two alternative user interfaces: 1) semi-transparent information overlays, 2) an animated character. Because ITV differs from desktop computing, traditional usability evaluation techniques were adapted for the study. The evaluation revealed that users reported higher affective quality with the animated character user interface. Although the success of animated characters in desktop productivity applications has been limited, there is growing evidence that animated characters might be viable in a domestic environment for leisure activities, such as interactive TV.
The System Usability Scale and Non-Native English Speakers BIBAHTMLPDF 185-188
  Kraig Finstad
The System Usability Scale (SUS) was administered verbally to native English and non-native English speakers for several internally deployed applications. It was found that a significant proportion of non-native English speakers failed to understand the word "cumbersome" in Item 8 of the SUS (that is, "I found the system to be very cumbersome to use.") This finding has implications for reliability and validity when the questionnaire is distributed electronically in multinational usability efforts.
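For reference, the standard SUS scoring procedure converts ten 1-to-5 ratings into a 0-100 score: odd-numbered (positively worded) items contribute (rating - 1), even-numbered (negatively worded) items, such as the Item 8 discussed above, contribute (5 - rating), and the sum is multiplied by 2.5. A minimal sketch of that standard procedure (the function name is invented for the example):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 item ratings.

    Odd-numbered items are positively worded, even-numbered items
    (including Item 8, "cumbersome") are negatively worded.
    """
    assert len(responses) == 10
    total = 0
    for item, rating in enumerate(responses, start=1):
        if item % 2 == 1:
            total += rating - 1  # positive item: higher rating is better
        else:
            total += 5 - rating  # negative item: lower rating is better
    return total * 2.5           # scale the 0-40 raw sum to 0-100
```

A respondent who misunderstands a negatively worded item and answers it as if it were positive can shift the score substantially, which is why the comprehension failure reported here threatens validity.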