
International Journal of Man-Machine Studies 25

Editors: B. R. Gaines; D. R. Hill
Publisher: Academic Press
Standard No: ISSN 0020-7373; TA 167 A1 I5
  1. IJMMS 1986 Volume 25 Issue 1
  2. IJMMS 1986 Volume 25 Issue 2
  3. IJMMS 1986 Volume 25 Issue 3
  4. IJMMS 1986 Volume 25 Issue 4
  5. IJMMS 1986 Volume 25 Issue 5
  6. IJMMS 1986 Volume 25 Issue 6

IJMMS 1986 Volume 25 Issue 1

Dealing with a Database Query Language in a New Situation BIBA 1-17
  Cecilia Katzeff
The experiment presented constitutes an initial attempt to create a model of the end user of a database query language. The central issue is the cognitive behaviour reflected when a user faces a query condition which is similar to, but more complex than, those conditions he has been taught to deal with. This behaviour is characterized by its problem-solving nature. In order to approach an understanding of the cognitive processes involved, a task analysis is undertaken. The instruction text to the query language is seen as a major ingredient of the task analysis. The analysis suggests that subjects may focus on different aspects of the instruction text. Data derived by recording the subjects "thinking aloud" provide support for this expectation. Four different routes to the solution of the "new" questions are identified.
A Descriptive/Prescriptive Model for Menu-Based Interaction BIBA 19-32
  James D. Arthur
As software systems continue to increase in sophistication and complexity, so do the interface requirements that support user interaction. To select the proper blend of ingredients that constitutes an adequate user interface, it is essential that the system designer has a firm understanding of the interaction process, i.e. how the selected dialogue format interacts with the user and with the underlying task software. One major approach to understanding the software design process and improving the quality of a product is through the use of models. The application of models to user/system interaction can provide the crucial feedback and innovative insights for designing and developing exemplary interactive systems. In this paper, we present one such model that describes as well as prescribes the critical elements for menu-based interaction and their interface dependencies. The model structure provides the flexibility for characterizing menu-based interactions that vary in levels of sophistication, and include (1) computational and decision capabilities based on task oriented actions, (2) user response reversal for error recovery, and (3) user directed movement. Finally, to illustrate the intrinsic power of our model, we present a "descriptive" narrative of two prominent menu-driven systems, Smalltalk and Zog, followed by a discussion of the model's "prescriptive" influence on the design and development of a third menu-based system, Omni.
Adaptive Command Prompting in an On-Line Documentation System BIBA 33-51
  M. V. Mason
An enhanced version of the UNIX on-line Programmer's Manual is available to users in the Department of Computer Studies at Leeds University. The system is, however, used by people with wide-ranging levels of experience, and thus an adaptive interface to the system was regarded as an important feature. After briefly reviewing the role of adaptable and adaptive interfaces in human-computer interaction, this paper describes a technique called adaptive command prompting which is well suited to the Programmer's Manual application. The system is capable of automatically adjusting a set of prompts in order to suit the individual user interacting with it. The adaptation is controlled by a set of heuristics modelled in terms of a production system.
Comparison of Rough-Set and Statistical Methods in Inductive Learning BIBA 53-72
  S. K. M. Wong; Wojciech Ziarko; R. Li Ye
Quinlan suggested an inductive algorithm based on the statistical theory of information originally proposed by Shannon. Recently Pawlak showed that the principles of inductive learning (learning from examples) can be precisely formulated on the basis of the theory of rough sets. These two approaches are apparently very different, although in both methods objects in the knowledge base are assumed to be characterized by "features" (attributes and attribute values). The main objective of this paper is to show that the concept of "approximate classification" of a set is closely related to the statistical approach. In fact, in the design of inductive programs, the criterion for selecting dominant attributes based on the concept of rough sets is a special case of the statistical method if equally probable distribution of objects in the "doubtful region" of the approximation space is assumed.
A Comparison of Menu Selection Techniques: Touch Panel, Mouse and Keyboard BIBA 73-88
  John Karat; James E. McDonald; Matt Anderson
Two studies were conducted to test user performance and attitudes for three types of selection devices used in computer systems. The techniques examined included on-screen direct pointing (touch panel), off-screen pointer manipulation (mouse), and typed identification (keyboard). Both experiments tested subjects on target selection practice tasks, and in typical computer applications using menu selection and keyboard typing. The first experiment examined the performance and preferences of 24 subjects. The second experiment used 48 subjects divided into two typing skill groups and into male-female categories. The studies showed performance advantages for on-screen touch panel entry. Preference ratings for the touch panel and keyboard devices depended on the type of task being performed, while the mouse was always the least preferred device. Differences between this result and those reporting an advantage of mouse selection are discussed.
Infinite-Valued Logic Based on Two-Valued Logic and Probability. Part 1.1. Difficulties with Present-Day Fuzzy-Set Theory and Their Resolution in the TEE Model BIBA 89-111
  Ellen Hisdal
A number of difficulties with present-day fuzzy-set theory are pointed out in order to justify the necessity of a modified approach to this theory. For each difficulty, its resolution in a modified approach, called the TEE model, is outlined. The paper is therefore also a short survey of the TEE model theory for grades of membership. Superficially stated, this model interprets a membership value μ(uex) assigned by a subject to an object of attribute value uex as his estimate of the probability that the label λ would be assigned to that object in an LB (labeling) or YN (yes-no) experiment; e.g. by himself under nonexact conditions of observation; or by another subject.

IJMMS 1986 Volume 25 Issue 2

Infinite-Valued Logic Based on Two-Valued Logic and Probability. Part 1.2. Different Sources of Fuzziness BIBA 113-138
  Ellen Hisdal
It is claimed that a meaningful theory of the uncertainty or fuzziness connected with the assignment to objects of adjective labels, or of membership values concerning such labels, must be based on a semantic or physical model for the origins of the uncertainty in labeling or of the partial grade-of-membership or truth values. The mathematical formulas and operations will then follow from the model. The paper lists 14 different origins or sources of fuzziness. The first six sources #1a-3b are due to different variabilities, e.g. of conditions of observation or of the thresholds used by different subjects for classifying an object as λ (e.g. λ = tall) in an LB (labeling) or YN (yes-no) situation. Fuzziness #1b-3b are due to actually existing variabilities in the given situation. They give rise to non-unique membership values. Fuzziness #1a-3a give rise to the membership concept itself. They are due to anticipation by the subject of variabilities. Partial membership values are interpreted as estimates by the subject of the probability that the object would be labeled λ in an LB (labeling) or YN (yes-no) situation. Fuzziness #4-11 give rise to ambiguous membership values just like #1b-3b. However, the ambiguity in the membership values can be removed by a more precise definition of the situation or the reasoning procedure to which the subject refers. For fuzziness #1b-3b we can define expected membership values for objects of a given exact attribute value uex.
Computer Decision Support for Senior Managers: Encouraging Exploration BIBA 139-152
  Tim Smithin; Colin Eden
This paper discusses issues involved in designing a computer-based decision support system for senior decision makers in business organizations. It is based on the authors' experiences of developing and using such a system over several years in a number of large U.K. companies. The paper focusses upon the nature of decision-making for senior managers and emphasizes the highly political and turbulent environment in which they work, and the implications this has for designing a system which can compete for the attention of a busy manager. Rather than describe "another system" we are anxious to discuss the problems involved in creating decision support systems that will be practical, and assist with decisions that "really matter".
Instructionless Learning about a Complex Device: The Paradigm and Observations BIBA 153-189
  Jeff Shrager; David Klahr
In order to study the mechanisms that underlie "intuitive" scientific reasoning, verbal protocols were collected from seven computer-naive college students asked to "figure out" a Big Trak programmable toy, without a user's guide or other assistance. We call this paradigm Instructionless Learning. The present paper presents a detailed account of how people learn about a complex device in an instructionless-learning context. Subjects' behavior is divided into an orientation phase and a systematic phase. We attend most carefully to the systematic phase. Learners form hypotheses about various aspects of the Big Trak: the syntax of interaction, the semantics of operators, and the device model -- which includes objects such as memories, switches, etc. Subjects attempt to confirm hypotheses from which predictions can be made, to refine hypotheses that do not immediately yield predictions, and to verify their total knowledge of the device. Hypotheses are formulated from observation. If an initial hypothesis is incorrect, it will yield incorrect predictions in interactions. When such failures occur, learners change their theory to account for the currently perceived behavior of the device. These changes are often based upon little evidence and may even be contradicted by available information. Thus, the new hypotheses may also be incorrect, and lead to further errors and changes.
On the Applied Use of Human Memory Models: The Memory Extender Personal Filing System BIBA 191-228
  William P. Jones
The benefits of electronic information storage are enormous and largely unrealized. As its cost continues to decline, the number of files in the average user's personal database may increase substantially. How is a user to keep track of several thousand, perhaps several hundred thousand, files? The Memory Extender (ME) system improves the user interface to a personal database by actively modeling the user's own memory for files and for the context in which these files are used. The ME system is similar, in many respects, to current spreading activation, network models of human memory. Files are multiply indexed through a network of variably weighted term links. Context is similarly represented and is used to minimize the user input necessary to specify a file unambiguously -- either for purposes of storage or retrieval. Files are retrieved through a spreading-activation-like process. The system aims toward an ideal in which the computer provides a natural extension to the user's own memory.
Cognitive Layouts of Windows and Multiple Screens for User Interfaces BIBA 229-248
  Kent L. Norman; Linda J. Weldon; Ben Shneiderman
In order to make computers easier to use and more versatile many system designers are exploring the use of multiple windows on a single screen and multiple coordinated screens in a single work station displaying linked or related information. The designers of such systems attempt to take into account the characteristics of the human user and the structure of the tasks to be performed. Central to this design issue is the way in which the user views and cognitively processes information presented in the windows or in multiple screens. This paper develops a theory of the "cognitive layout" of information presented in multiple windows or screens. It is assumed that users adopt a cognitive representation or layout of the type of information to be presented and the relationships among the windows or screens and the information they contain. A number of cognitive layouts are derived from theories in cognitive psychology and are discussed in terms of the intent of the software driving the system and congruence with the cognitive processing of the information. It is hypothesized that the particular layout adopted by a user will drastically affect the user's understanding and expectation of events at the human-computer interface and could either greatly facilitate or frustrate the interaction. Ways of ensuring the former and avoiding the latter are discussed in terms of implementations on existing multiple-window and multiple-screen systems.

IJMMS 1986 Volume 25 Issue 3

Touch-Sensitive Screens: The Technologies and Their Application BIBA 249-269
  J. A. Pickering
Touch sensing on the surface of a computer display screen is a means of capturing man's natural pointing instincts and using them as a mode of human-computer communication. The "touch-sensitive screen" is gradually becoming more popular and it is vital that the individual characteristics of the technologies are understood in order to produce a successful interface. This review covers the methods used for touch sensing on displays, their modes of use and how well they might meet application designers' and users' expectations. None of the technologies available at present are "ideal", particularly for the casual computer user, though they can still provide very successful interfaces if their particular properties are fully taken into account in the target application.
A Phonemic Transcription Program for Polish BIBA 271-293
  Jan Chomyszyn
In this article, an efficient computer program for orthographic-to-phonemic translation of Polish-language texts is presented. The translation rules, which had been worked out by Steffen-Batogowa and were used in the program, are briefly described.
   Such a program, capable of translating orthographic texts to phonemic form automatically, can be applied both in linguistic research and in work directed towards achieving voice communication between man and computer. The program has been implemented in connection with the work on the automation of the "microphonemic" method of speech synthesis, carried out in the Institute of Informatics at Warsaw University.
   The considerable number of very detailed translation rules for the Polish language would make the program inefficient if they were used directly in its text. These difficulties can be avoided, however, by applying the internal computer representation of the rules presented in the article.
A Framework for Investigating Language-Mediated Interaction with Machines BIBA 295-315
  Magdalena Zoeppritz
The models of language that underlie most programming and query language experimentation are generally too weak to deal adequately with the linguistic processes involved. A more powerful model can be derived from linguistic theory, one that supplements the formal description of languages by describing their communicative aspects as well. Such a model is outlined in the paper and its components are exemplified by corresponding phenomena in natural languages and programming languages.
Criteria for the Selection of Search Strategies in Best-Match Document-Retrieval Systems BIBA 317-326
  Fiona M. McCall; Peter Willett
It has been suggested that document-retrieval systems should have a range of different search strategies available which may be used in response to different types of query. This raises the question of what criteria should be used to select a strategy for use with an individual query. This paper considers statistical characteristics of queries that may be used to choose between clustered and non-clustered searches in best-match retrieval systems. The results suggest that such statistical information is not a good basis for the selection of search strategies.
Learning Computer Programming Through Dynamic Representation of Computer Functioning: Evaluation of a New Learning Package for Pascal BIBA 327-341
  Leonard Goodwin; Mohammad Sanati
This paper describes and evaluates a new approach to teaching the beginning Pascal programming course at Worcester Polytechnic Institute. At the heart of this approach is a new computer learning package, called PASLAB, which allows students to understand what is happening inside the computer relative to statements in Pascal programs constructed by an expert.
   There were 322 students participating in the evaluation under traditional conditions in 1984, and 296 students participating under the new conditions in 1985. Comparison between the two groups was made by examining the respective regression equations predicting final grade for each set of students. While the characteristics of both sets of students were very similar, the regression equations were markedly different.
   Under traditional conditions, background characteristics of students, including programming experience, had a major impact on the grades those students received. With Paslab in place, the impact of those background characteristics decreased while the importance of psychological orientations (motivation) increased. There were no differences in performance with respect to gender.
   Further research is needed to test the impact of Paslab in non-technical institutions. The positive results of this study suggest the fruitfulness of developing additional computer learning packages on the same principles as Paslab to be used in other computer-science courses as well as in other fields.
Transitive Closures of Fuzzy Thesauri for Information-Retrieval Systems BIBA 343-356
  James C. Bezdek; Gautam Biswas; Li-ya Huang
In this paper we represent a thesaurus (R) for an information system as the sum of two fuzzy relations, S (synonyms) and G (generalizations). The max-star completion of R is defined as R*, the max-star transitive closure of R. We interpret R*, which extends the concept-pair fuzzy relation R initially provided by an expert, as a linguistic completion of the thesaurus. Six max-star completions, corresponding to six well-known T-norms, are defined, analysed, and numerically illustrated on a nine-term dictionary. The application of our results in the context of document retrieval is this: one may use R* as a means of effecting replacements of terms appearing in a natural-language document request. The weights (R*)ij can be used to diminish or increase one's confidence in the degree of support being developed for each document considered relevant to a given query. The ijth element of R* can be regarded as the ultimate extent to which term j can be "reached" from term i; the values in R* thus represent degrees of confidence in max-star transitive chains.

IJMMS 1986 Volume 25 Issue 4

Arithmetic and Other Operations on Dempster-Shafer Structures BIBA 357-366
  Ronald R. Yager
We show how variables whose values are represented by Dempster-Shafer structures can be combined under arithmetic operations such as addition. We then generalize this procedure to allow for the combination of these types of variables under more general operations. We note that Dempster's rule is a special case of this situation under the intersection operation.
General Reconstruction Characteristics of Probabilistic and Possibilistic Systems BIBA 367-397
  George J. Klir; Behzad Parviz
The subject of this paper is a general empirical study of the reconstruction problem -- one of two complementary problems of reconstructability analysis. After a brief overview of relevant concepts, the paper contains a description of the computer experiments performed and the experimental results obtained. These experiments were performed for systems conceptualized in terms of either probability theory or possibility theory. Contrary to all previous experiments in the area of reconstructability analysis, which were based on systems perfectly reconstructable from some subsystems, experiments described here are not idealized in this sense. The significance of the reconstruction principle of inductive reasoning, whose validity is confirmed by the experimental results obtained, is also discussed.
Effects of Experience and Comprehension on Reading Time and Memory for Computer Programs BIBA 399-409
  Albert L. Schmidt
The present study investigated the effects of experience and comprehension on reading time, recall and recognition memory for computer programs. Twenty computer science students, who varied along the dimension of experience, were presented with two lists of PL/I computer programming statements. List 1 was a meaningful program and contained programming constructs which all the programmers had used. List 2 was a random list of PL/I statements. Each list was presented four times. More-experienced programmers read the meaningful program faster than less-experienced programmers. Comprehension was positively correlated with recall of the meaningful program. Results from recognition tests revealed a strong positive correlation between comprehension and recognition of meaningful units. Neither experience nor comprehension was related to performance with random information. These results are discussed in terms of elaboration, the concept of knowledge compilation (Anderson, 1982) and the contribution of experience and comprehension to performance.
Cognitive Attributes: Implications for Display Design in Supervisory Control Systems BIBA 411-438
  Elizabeth D. Murphy; Christine M. Mitchell
Based on a review of the literature, a cognitive model of human information processing is presented. The model synthesizes several perspectives with the intent of suggesting guidelines for human-computer interface designers of supervisory control systems. Given this model, the paper identifies 18 attributes of cognition that are particularly relevant to information display design and real-time decision-making. The discussion of each attribute of cognition has four components. First, each cognitive attribute is defined based on current interpretations in the cognitive-psychology literature. Next, given traditional design approaches, likely negative outcomes of automation as they affect the cognitive attribute are identified. Third, given the hypothesized effects, improvements in conventional design are suggested. Finally, the discussion of each cognitive attribute concludes with an example drawn from existing command and control environments. The paper is intended to provide a well-defined and coherent background for empirical research exploring alternative strategies of human-computer interface design for decision-makers in supervisory control systems.
Equal Opportunity Interactive Systems BIBA 439-451
  Colin Runciman; Harold Thimbleby
One view of interactive computer systems is that the user, having problems to solve, supplies the "givens" of these problems to the machine, which in response supplies as output the "unknowns". Reassigning or discarding these labels "givens" and "unknowns" is a time-honoured heuristic for problem-solving. Also, people seem to prefer interpretations without such labels for fast interactive systems, and mere speed in systems that do embody fixed distinctions between input and output often contributes little towards ease of use -- it may only serve to emphasize a frustrating mechanical dumbness. We therefore apply the same heuristic to the design of interactive computer systems, noting that a number of existing successful interactive system styles can be viewed as the outcome of this approach.
A Neuropsychological Test Battery for the Apple II-E BIBA 453-467
  M. C. Moerland; A. P. Aldenkamp; W. C. J. Alpherts
The computerized part of a neuropsychological test battery for the assessment of cognitive functions in epilepsy is described. The test module forms the central part of the software. The report and research modules work on the data generated by the test module. The upper layer of the test module is formed by the Test Operating System (TOS), which enables flexible and fast shifting of tests and permits optimum use of memory and assembler routines. Tests currently in use are described and quantitative as well as qualitative improvements of assessment procedures are illustrated. Accurate timing of stimulus-and-response events is essential to several of these improvements and may be of special importance for research in epilepsy. Several problems which may arise and accumulate from the current lack of standardization of hardware and software design in the research on automated psychological testing are summarized.

IJMMS 1986 Volume 25 Issue 5

Is Top-Down Natural?: Some Experimental Results from Non-Procedural Languages BIBA 469-478
  G. R. Finnie
Non-procedural financial models can be developed in either a top-down or bottom-up fashion. Experiments with novice users of these packages indicate that, despite some encouragement, few individuals apply a naturally top-down technique. Knowledge of procedural languages such as COBOL further inhibits the use of the top-down approach.
Considerations of Menu Structure and Communication Rate for the Design of Computer Menu Displays BIBA 479-489
  Norwood Sisson; Stanley R. Parkinson; Kathleen Snowberry
A design methodology is presented for configuring computer menu displays to optimize total execution time for simple menu selection. Communication times and operator times were combined to obtain total execution times for four menu structures spanning the breadth/depth spectrum for 64 items. The broadest menu (64 items in one frame) was favored at the most rapid communication rate (960 characters per second) whereas menus with intermediate values of breadth/depth (eight items per frame at two levels and four items per frame at three levels) yielded the fastest execution times at slower communication rates.
An Experiment in Graphical Perception BIBA 491-501
  William S. Cleveland; Robert McGill
Graphical perception is the visual decoding of categorical and quantitative information from a graph. Increasing our basic understanding of graphical perception will allow us to make graphs that convey quantitative information to viewers with more accuracy and efficiency. This paper describes an experiment that was conducted to investigate the accuracy of six basic judgments of graphical perception. Two types of position judgments were found to be the most accurate, length judgments were second, angle and slope judgments were third, and area judgments were last. Distance between judged objects was found to be a factor in the accuracy of the basic judgments.
Graphical Display of Complex Information within a Prolog Debugger BIBA 503-521
  Alan D. Dewar; John G. Cleary
An interactive Prolog debugger, DEWLAP, is described. Output from the debugger is in the form of graphical displays of both the derivation tree and the parameters to procedure calls. The major advantage of such displays is that they allow important information to be displayed prominently and unimportant information to be shrunk so that it is accessible but not distracting. Other advantages include the following: the control flow in Prolog is clearly shown; the control context of a particular call is readily determined; it is easy to find out whether two uninstantiated variables are bound together; and very fine control is possible over debugging and display options. A high level graphics language is provided to allow the user to tailor the graphical display of data structures to particular applications. A number of issues raised by the need to update such displays efficiently and to control their perceived complexity are addressed. The DEWLAP system is implemented in Prolog on relatively standard hardware with a central processor running Unix and remote workstations with bit-mapped displays and mice.
Representing Quantitative and Qualitative Knowledge in a Knowledge-Based Storm-Forecasting System BIBA 523-547
  Renee Elio; Johannes de Haan
METEOR is a rule- and frame-based system for short-term severe storm forecasting. Initial predictions are based on interpretations of contour maps generated by statistical predictors of storm severity. To confirm these predictions, METEOR considers additional quantitative measurements, ongoing meteorological conditions and events, and how the expert forecaster interprets these factors. Meteorological events are derived from interpreting human observations of weather conditions in the forecast area. This task requires a framework that supports inferences about the temporal and spatial features of meteorological activities. To accommodate the large amounts of different types of knowledge characterizing this problem, a number of extensions to the rule and frame representations were developed. These extensions include a view scheme to direct property inheritance through intermingled hierarchies and the automatic generation of production system rules from descriptions stored in frames on an as-needed basis.
A Perceptual Study of the Flury-Riedwyl Faces for Graphically Displaying Multivariate Data BIBA 549-555
  Geert De Soete
An empirical study was performed in order to obtain information needed to optimize a particular graphical display that is used for representing multivariate data. More specifically, the perceptual salience of the features of the schematic faces proposed by Flury & Riedwyl was investigated. It was found that while some features were highly salient, others had little or no effect on the facial perception. It is argued that this information can be used to tailor the graphical display to the perceptual system of the user.
Which Way to Computer Literacy, Programming or Applications Experience? BIBA 557-572
  Barbee T. Mynatt; Kirk H. Smith; Anita L. Kamouri; Teresa A. Tykodi
The majority of computer literacy courses include a hands-on component in which either programming or the use of applications packages is taught. To find out whether one type of experience provides a better basis for transfer to new computer situations, students in an introductory computer course were randomly assigned to a group that studied either BASIC programming or applications packages. All students attended the same lectures and read the same text. After 6 weeks both groups were observed under controlled conditions learning how to use a literary database system and a new programming environment. Compared with students with no computer experience, both groups displayed significant positive transfer on the applications test, but did not differ from each other. On the test of transfer to a new programming environment, the students who had studied programming displayed a small but significant advantage over those who had studied applications.
Online Library Catalog Systems: An Analysis of User Errors BIBA 573-592
  Beverly Janosky; Philip J. Smith; Charles Hildreth
A study of 30 first-time users of LCS, the online library catalog system at the Ohio State University, was conducted. Subjects were provided with the online and offline help available to users of this system and were asked to conduct four standard searches (author, title, subject, etc.). While conducting the online searches, the subjects were asked to "think aloud".
   A detailed analysis of errors and associated verbal protocols provided insights into the design features and mental processes contributing to the commission of errors. Of particular significance are:
  • (1) the profound influence that incorrect mental representations have on the
        viewing and interpretation of online and offline help and instructions;
  • (2) the snowballing effects of a misconception as a user tries to seek and
        interpret additional information in attempts to recover from an error.
Arabic Language Parser BIBA 593-611
  Saad A. Mehdi
This paper describes a computer system for the syntactic parsing of Arabic sentences. It consists of a word analyser and a syntactic parser based on the Definite Clause Grammar (DCG) formalism. The system is written in Prolog. An introduction to the Arabic language and its features is included.

IJMMS 1986 Volume 25 Issue 6

Toward a General Theory of Reasoning with Uncertainty. Part II: Probability BIBA 613-631
  Ronald R. Yager
We examine procedures for representing information that has both a probabilistic and a possibilistic component. Use is made of the theory of approximate reasoning as well as the Dempster-Shafer theory.
Novices on the Computer: A Review of the Literature BIBA 633-658
  Carl Martin Allwood
The literature on cognitive aspects of novices' use of computers is reviewed. Many of the conclusions drawn in cognitive psychology about differences between novices and experts also hold in the computer domain. Compared with experts, novices have less, and more fragmented, knowledge; they spend less time encoding the task, and do so in a way that is more strongly determined by the surface features of the problem or the information given. Novices generally make more errors and have greater difficulty finding them than experts do. Other studies show that novices have difficulty taking advantage of help from advisors, computer programs and other sources of information. The role of models and analogical thinking in learning to interact with a computer is discussed, suggestions are given for how novices' difficulties could be alleviated, and topics for future research are proposed.
What "Question-Asking Protocols" Can Say about the User Interface BIBA 659-673
  Takashi Kato
To make computer systems easier to use, we need behavioural data that pinpoint the specific needs and problems users may have. Recently, the "thinking-aloud protocol" method has been adopted as a technique for studying user behaviour in interactive computer systems. In the present paper, the "question-asking protocol" method is proposed as a viable alternative to the thinking-aloud method where application of the latter is difficult or even inappropriate. It is argued that question-asking protocols shed light on (1) what problems users experience in what context, (2) what instructional information they come to need, (3) what features of the system are harder to learn, and (4) how users may come to understand or misunderstand the system.
An Experiment to Test User Validation of Requirements: Data-Flow Diagrams vs Task-Oriented Menus BIBA 675-684
  John T. Nosek; Judith D. Ahrens
A major concern for systems developers is the problem of user validation of system requirements. To the extent that the requirements representation elicits important feedback from the user, the representation technique can be said to be appropriate for user validation. Moran suggests that users develop a conceptual model of a system defined as the "knowledge that organizes how the system works and how it can be used to accomplish tasks". An experiment is conducted to determine whether a task-oriented, downward-cascading menu representation (similar to what the user would find in a prototyping environment), which closely corresponds to Moran's idea of a user's conceptual model, permits better comprehension and therefore better validation than data-flow diagrams, which have been strongly advocated as facilitating user validation. A within-group, counterbalancing technique to mitigate subject variability was used to test subjects' comprehension of data-flow diagrams vs the task-oriented menu representations of requirements. Subjects scored significantly higher on the test using the menu representation, indicating that task-oriented menus may be more effective in user validation of requirements than data-flow diagrams.
Aptitude for Computer Literacy BIBA 685-696
  Dona M. Kagan; Leah Rose Pietron
Are the cognitive abilities related to achievement in an introductory computer programming course also related to achievement in a computer literacy course? Correlations were examined between scores on a programming aptitude test and scores on three exams given in an undergraduate course that taught the use of software. A secondary purpose of the study was to investigate correlations between several personality measures and exam scores. Subjects were 90 college students enrolled in "Computers in Business". Once basic verbal and numerical aptitude were controlled for, the programming aptitude test did not significantly predict scores on any of the course exams. Thus, the task of learning to use software seemed analogous to any other academic learning and could be viewed as simply another application of generic cognitive abilities. Two variables, ownership of a personal computer and scores on a personality test measuring concern for detail and perfectionism, were conducive to achievement early in the course, while experience in a prior programming course was related to achievement later in the course.
Beacons in Computer Program Comprehension BIBA 697-709
  Susan Wiedenbeck
In programming, beacons are lines of code that serve as typical indicators of a particular structure or operation. This research sought evidence for the existence and use of beacons in the comprehension of a sort program. In the first experiment, subjects memorized and later recalled the whole sort program. Experienced programmers, but not novices or intermediates, recalled the beacon lines much better than the non-beacon lines. In the second experiment, experienced programmers studied the same program and were then asked to recall several isolated parts of it. They did not know in advance that they would be asked to recall. Subjects recalled the beacon parts much better than the non-beacon parts. They were also more certain that they had recalled the beacon correctly. The results of both experiments support the idea that beacons serve as a focal point for the study and understanding of programs by experienced programmers.
Computer Anxiety: Sex, Race and Age BIBA 711-719
  Faith D. Gilroy; Harsha B. Desai
Two studies are described in this article. The first examined antecedents of computer anxiety; the second exposed two groups of subjects who had no previous experience with computers to two treatments designed to lower that anxiety.
   Results indicated that, for highly anxious persons, an English composition course in which students used word-processing as a tool was significantly more effective in reducing computer anxiety than was a course in computer programming. The programming course, however, was significantly more effective in reducing anxiety than no treatment. Women were represented more often than men in the high-anxiety conditions. Results are discussed in terms of appropriate training techniques in educational and workplace environments to lower anxiety in vulnerable populations so that all might participate in the technological revolution.