IJMMS Tables of Contents: 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39

International Journal of Man-Machine Studies 32

Editors: B. R. Gaines; D. R. Hill
Dates: 1990
Volume: 32
Publisher: Academic Press
Standard No: ISSN 0020-7373; TA 167 A1 I5
Papers: 34
Links: Table of Contents
  1. IJMMS 1990 Volume 32 Issue 1
  2. IJMMS 1990 Volume 32 Issue 2
  3. IJMMS 1990 Volume 32 Issue 3
  4. IJMMS 1990 Volume 32 Issue 4
  5. IJMMS 1990 Volume 32 Issue 5
  6. IJMMS 1990 Volume 32 Issue 6

IJMMS 1990 Volume 32 Issue 1

Evaluation of Algorithms for Combining Independent Data Sets in a Human Performance Expert System BIBA 1-19
  Valerie J. Gawron; David J. Travale; Jeannette G. Neal; Colin G. Drury; Sara J. Czaja
As part of an ongoing program to develop a Computer Aided Engineering (CAE) system for human factors engineers, a Human Performance Expert System, Human, was designed. The system contains a large database of human-performance equations derived from human performance research reported in the open literature. Human accesses these data to predict task performance times, task completion probabilities, and error rates. A problem was encountered when multiple independent data sets were relevant to one task. For example, a designer is interested in the effects of luminance and font size on the number of reading errors. Two data sets exist in the literature: one examining the effects of luminance, the other, font size. The data in the two sets were collected at different locations, with different subjects, and at different times. How can the two data sets best be combined to address the designer's problem?
   On the basis of an extensive review of the human performance literature and statistical procedures, four combining algorithms were developed. These four algorithms were tested in two steps. In step one, two reaction-time experiments were conducted: one to evaluate the effect of the number of alternatives on reaction times; the second, to evaluate the effects of signals per minute and the number of displays being monitored. The four algorithms were used on the data from these two experiments to predict reaction time in the situation where all three independent variables are manipulated simultaneously. In step two of the test procedure, a third experiment was conducted. Subjects who had not participated in either Experiment 1 or 2 performed a reaction-time task under the combined effects of all three independent variables. The predictions made from step one were compared to the actual empirical data collected in Experiment 3. The best predictor of the mean in Experiment 3 was an unweighted average of the means in Experiments 1 and 2; the best predictor of the standard deviation in Experiment 3 was an unweighted average of the standard deviations (SDs) in Experiments 1 and 2. Based on these results, Human uses an average of the means to combine the results from multiple independent data sets.
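The combining rule the study reports as the best predictor, an unweighted average of the means and of the standard deviations, can be sketched as follows (Python; the reaction-time figures are invented for illustration, not taken from the experiments):

```python
from statistics import mean

def combine_means(means):
    """Unweighted average of the means of independent data sets."""
    return mean(means)

def combine_sds(sds):
    """Unweighted average of the standard deviations of independent data sets."""
    return mean(sds)

# Hypothetical reaction-time summaries (seconds) from two experiments
predicted_mean = combine_means([0.42, 0.48])  # approx. 0.45
predicted_sd = combine_sds([0.05, 0.07])      # approx. 0.06
```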
Normalized Performance Ratio -- A Measure of the Degree to which a Man-Machine Interface Accomplishes Its Operational Objective BIBA 21-108
  Brian Moffat
A metric, referred to as the Normalized Performance Ratio (NPR), is defined. The NPR measures a quality of the man-machine interface (MMI) which profoundly influences its value to the human operator of the associated man-machine system. The value of an MMI's NPR is equal to the mean of the periods of time required by a group of people, varying in their familiarity with the interface's operation, to complete an identical processing task with the system (the mean of the completion times), divided by the sample standard deviation of those completion times.
   The potential variability among MMIs is infinite. However, all MMIs share a common operational objective, which is to facilitate an operator's ability to manipulate the MMI's associated processor. The value of an MMI's NPR is a measure of the degree to which that MMI satisfies that operational objective. It is asserted that the value of an MMI's NPR is independent of the complexity of the processing task(s) used for its measurement, and of the complexity of the MMI-processor system. The NPR would thus provide the basis for the unbiased comparison of all MMIs.
   A detailed description of the methodology with which an MMI's NPR may be measured is provided, along with illustrations of that methodology which are based on the analyses of the MMIs of actual man-machine systems. Existing MMI-evaluation methods are critically reviewed.
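Under the definition given above, an MMI's NPR can be computed directly from a sample of completion times; a minimal sketch in Python (the times are invented for illustration):

```python
from statistics import mean, stdev

def normalized_performance_ratio(completion_times):
    """NPR = mean of the completion times divided by the sample
    standard deviation of those times, measured over operators
    of varying familiarity with the interface."""
    return mean(completion_times) / stdev(completion_times)

# Hypothetical completion times (seconds) for one task on one MMI
times = [30.0, 42.0, 55.0, 90.0, 140.0]
npr = normalized_performance_ratio(times)
```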
Response Assertiveness in Human-Computer Dialogues BIBA 109-117
  Paul Buchheit; Thomas Moher
This paper describes an attempt to determine expectations in human-computer dialogue through an experiment in which human subjects predict the responses of another human or a computer to natural language input. The experiment consists of a questionnaire based on discourse patterns that exhibit slight differences in their syntactic structures, and which, as a consequence, represent distinct levels of linguistic meaning. The specific speech acts defined by these levels (directives, assertions, requests, etc.) are used as a basis of categorization for a series of multiple-choice questions and answers designed to measure a subject's relative predispositions toward human and computer speakers. Results of the test are compared to intuitive expectations and then analysed in terms of potential application to a natural language processing system.

IJMMS 1990 Volume 32 Issue 2

Decision Analysis Techniques for Knowledge Acquisition: Combining Information and Preferences Using Aquinas and Axotl BIBA 121-186
  Jeffrey M. Bradshaw; John H. Boose
The field of decision analysis is concerned with the application of formal theories of probability and utility to the guidance of action. Decision analysis has been used for many years as a way to gain insight regarding decisions that involve significant amounts of uncertain information and complex preference issues, but it has been largely overlooked by knowledge-based system researchers. This paper illustrates the value of incorporating decision analysis insights and techniques into the knowledge acquisition and decision making process. This approach is being implemented within Aquinas, a personal construct-based knowledge acquisition tool, and Axotl, a knowledge-based decision analysis tool. The need for explicit preference models in knowledge-based systems will be shown. The modeling of problems will be viewed from the perspectives of decision analysis and personal construct theory. We will outline the approach of Aquinas and then present an example that illustrates how preferences can be used to guide the knowledge acquisition process and the selection of alternatives in decision making. Techniques for combining supervised and unsupervised inductive learning from data with expert judgment, and integration of knowledge and inference methods at varying levels of precision will be presented. Personal construct theory and decision theory are shown to be complementary: the former provides a plausible account of the dynamics of model formulation and revision, while the latter provides a consistent framework for model evaluation. Applied personal construct theory (in the form of tools such as Aquinas) and applied decision theory (in the form of tools such as Axotl) are moving along convergent paths. We see the approach in this paper as the first step toward a full integration of insights from the two disciplines and their respective repertory grid and influence diagram representations.
Fuzzy Windows and Classification Systems BIBA 187-201
  J. C. Santamarina; J. L. Chameau
A fuzzy-sets-based structure for classification systems is proposed in this paper. The basic idea is to use "windows" to represent the constraints on the possible values variables may take. The formalism is very simple; this simplicity makes it attractive in the development of knowledge-based systems. The most salient features of this structure include the possibility of developing composite solutions, searching for lacunae, and creating a case-based representation of knowledge with an avenue for modeling learning.
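The "window" idea can be illustrated with a trapezoidal membership function, a common fuzzy-set choice (an assumption; the paper's exact window form, and the class and variable names below, are illustrative, not taken from the abstract):

```python
def window(a, b, c, d):
    """Trapezoidal fuzzy 'window': full membership on [b, c],
    linear shoulders on [a, b] and [c, d], zero outside [a, d]."""
    def mu(x):
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        if x < b:
            return (x - a) / (b - a)
        return (d - x) / (d - c)
    return mu

# Hypothetical class described by windows on two variables; a case's
# class membership is the minimum of its per-variable memberships.
class_a_var1 = window(20, 30, 50, 60)
class_a_var2 = window(0, 5, 25, 35)

def class_membership(v1, v2):
    return min(class_a_var1(v1), class_a_var2(v2))
```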
An Investigation of the Applicability of Expert Systems to Job Shop Scheduling BIBA 203-213
  Sabah U. Randhawa; Edward D. McDowell
Although the job shop scheduling (JSS) problem is of eminent practical importance and has received considerable attention from both industry and academia, it remains an enigma because of its complexity. Artificial Intelligence holds the potential for providing solutions to complex problems, such as the JSS problem. However, due to the magnitude of the JSS problem, search techniques are not computationally feasible. Expert Systems, though, have been successfully applied to problems of this type. The development of an expert system, however, implies the availability of an expert. Regrettably, such experts are not readily available in the JSS environment. The approach presented in this paper involves the use of computer simulation models of a job shop to train subjects until they are capable of effective scheduling, and then extracting the knowledge of these "experts" to develop an expert system.
Foundations of Declarative Debugging in Arbitrary Logic Programming BIBA 215-232
  Song Yuan Yan
Declarative debugging is an interactive process where the debugger acquires knowledge from an oracle (usually the user) about the intended interpretation of a program to be debugged and uses it to compare with the machine interpretation of the program in order to find the error in that program. The debugging is declarative in the sense that the user need only know the logic aspect of a program (i.e. its declarative meaning) and does not need to consider the computational behaviour of the program. An error is found by checking the validity of the solved goals with respect to their intended interpretation. In this paper, we introduce the basic ideas and concepts of declarative debugging, and provide a theoretical foundation with emphasis on the study of soundness and completeness for declarative debugging in arbitrary (unrestricted) first order logic programming.
A Computerised Clinical Test of Forgetting Based on the ACT Model of Memory Retrieval BIBA 233-244
  David G. Andrewes; Dana Maude
The ACT model of interference (Anderson, 1976) was applied to a visual-search paradigm using an elderly population (N=22) 65-85 years, in order to develop a computerised clinical test of forgetting. The test is to be used to identify similarities and differences between etiologically-distinct amnesic populations on the basis of susceptibility to interference. A visual-search task manipulated the number of examples presented in association with a particular category. This was achieved by requiring the subject to search for a varied number of distractor examples with a target example. As predicted by the ACT model, increasing the number of distractors resulted in slower identification of the target item, as measured by increased recognition response latency. Also as predicted, increasing the number of distractors also increased the number of recognition errors. The interference effect produced by the distractors was reduced by strengthening the association between the target word and the category. This was achieved by presenting the target and category a second time in the presence of different distractors. The test's potential as an automated assessment device is discussed.

IJMMS 1990 Volume 32 Issue 3

Effect of CAD on the Jobs of Drafters and Engineers: A Quantitative Case Study BIBA 245-262
  Ann Majchrzak
Job characteristics and performance data for users and nonusers of a CAD system for mechanical design work were quantitatively compared. Results indicated that for engineers, CAD users experienced more interdependence on their job while, for drafters, CAD users experienced less discretion, creativity, and teamwork. These differences between users and nonusers were not related to individual performance. Implications for practitioners implementing CAD and researchers studying technological change are drawn.
Role of the Present in Temporal Representation in Artificial Intelligence BIBA 263-274
  Elzbieta Hajnicz
In this paper some philosophical considerations concerning time are mentioned. Classical temporal logic is covered, but the main point of interest is the application of the notion of the present in artificial intelligence. In particular, methods of representing now in different temporal structures are described.
Automatic Rule Generation by the Transformation of Expert's Diagram: LIFT BIBA 275-292
  Jae Kyu Lee; In Koo Lee; Hyung Rim Choi; Sung Mahn Ahn
To enhance the efficiency of revealing and refining an expert's knowledge, the Expert's Diagram approach is proposed. The Expert's Diagram proposed in this paper is specifically designed for rule-based consulting systems. Using the Expert's Diagram approach, the knowledge acquisition system LIFT is developed, which transforms the Expert's Diagram automatically into rules in the syntax of the shell SKI 2, which was developed for tax-consulting purposes. For the transformation, either the conclusion-directed approach or the condition-directed approach can be applied. The role of LIFT can be generalized to some extent to adapt to changes in the Expert's Diagram and target shells. According to our experience in the acquisition of Korean corporate tax knowledge, experts could reveal their knowledge effectively using the Expert's Diagram after a short period of training. Thus the rules could be generated automatically by LIFT without the aid of knowledge engineers.
Soliciting Weights or Probabilities from Experts for Rule-Based Expert Systems BIBA 293-301
  Daniel E. O'Leary
Rule-based expert systems attach a weight to each rule in order to represent uncertainty or strength of association. There are a number of schemes that are used to represent uncertainty in expert systems. Some of these methods allow the system designer to solicit the probabilities used to compute the weights, to solicit the weights directly, or both.
   This paper presents results that indicate that if the weights are gathered directly, rather than using probabilities, then the weights may not meet the underlying conditions of the mathematical model of uncertainty on which the weights are based or the weights may imply highly unusual behavior for the underlying probabilities and implicit utility function.
   In one system it is found that there were violations of the mathematical properties of the model in over forty percent of the weights on the rules of the system. If the weights do not meet the constraints of the underlying mathematical models then such violations may yield inappropriate parameterization of other weights in order to make the model work. Further, such violations can lead to an inappropriate estimation of the probabilities of events by the system and yield inappropriate inferred weights.
   In another case it was found that a system was dominated by weights that suggest highly unusual behavior for the underlying probabilities.
   From an operational perspective these inconsistencies indicate the importance of the method used in gathering the weights, e.g. indirectly through the probabilities or directly through the weights. It also indicates the importance of validating and verifying the weights to ensure that the weights meet the needs of the underlying theory and do not force unusual relationships onto the underlying probabilities.
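One concrete instance of such consistency conditions (an assumption; the abstract does not name the uncertainty scheme involved) is the MYCIN-style certainty factor, whose definition in terms of prior and posterior probabilities constrains the admissible range of a directly elicited weight:

```python
def certainty_factor(p_h, p_h_given_e):
    """MYCIN-style certainty factor implied by the prior p(H) and
    the posterior p(H|E); values necessarily lie in [-1, 1]."""
    if p_h_given_e >= p_h:
        return (p_h_given_e - p_h) / (1.0 - p_h)
    return (p_h_given_e - p_h) / p_h

def is_consistent_weight(cf):
    """A directly elicited weight violates the model if it falls
    outside the measure's admissible range."""
    return -1.0 <= cf <= 1.0
```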
No IFs, ANDs, or ORs: A Study of Database Querying BIBA 303-326
  Sharon L. Greene; Susan J. Devlin; Philip E. Cannata; Louis M. Gomez
The difficulty of expressing database queries was examined as a function of the language used. Two distinctly different query methods were investigated. One used a standard database query language, SQL, requiring users to express an English query using a formal syntax and appropriate combinations of boolean operators. The second used a newly designed Truth-table Exemplar-Based Interface (TEBI), which only required subjects to be able to choose exemplars from a system-generated table representing a sample database. Through users' choices of critical exemplars, the system could distinguish between interpretations of an otherwise ambiguous English query. Performance was measured by number correct, time to complete queries, and confidence in query correctness. Individual difference analyses were done to examine the relationship between subjects' characteristics and ability to express database queries. Subjects' performance was observed to be both better, and more resistant to variability in age and levels of cognitive skills, when using TEBI than when using SQL to specify queries. Possible reasons for these differences are discussed.
An Examination of Gender Differences in the Determinants of Computer Anxiety and Attitudes Toward Microcomputers Among Managers BIBA 327-340
  Saroj Parasuraman; Magid Igbaria
The study examined the determinants of computer anxiety and attitudes toward microcomputers among 166 managers employed in a variety of organizations. Results indicated that men and women in managerial positions do not differ in the level of computer anxiety reported, and are very similar in their attitudes toward microcomputers. However, gender differences were found in the pattern of relationships of demographic and personality variables with computer anxiety and microcomputer attitudes. For men, education and intuition-sensing were negatively related to computer anxiety, while age, external locus of control, and math anxiety were associated with heightened computer anxiety. In contrast, demographic and personality variables were unrelated to computer anxiety among women. Computer anxiety was the strongest predictor of attitudes toward microcomputers among both men and women. Among women, however, the feeling-thinking dimension of cognitive style, and math anxiety were additional determinants of microcomputer attitudes.
Using a Natural Language Interface with Casual Users BIBA 341-361
  Ruth A. Capindale; Robert G. Crawford
Although there is much controversy about the merits of natural language interfaces, little empirical research has been conducted on the use of natural language interfaces for database access, especially for casual users. In this work casual users were observed while interacting with a real-life database using a natural language interface, Intellect.
   Results show that natural language is an efficient and powerful means for expressing requests. This is especially true for users with a good knowledge of the database contents regardless of training or previous experience with computers. Users generally have a positive attitude towards natural language. The majority of errors users make are directly related to restrictions in the vocabulary. However, feedback helps users understand the language limitations and learn how to avoid or recover from errors. Natural language processing technology is developed enough to handle the limited domain of discourse associated with a database; it is simple enough to support casual users with a general knowledge of the database contents; and it is flexible enough to assist problem-solving behaviour.

IJMMS 1990 Volume 32 Issue 4

Towards a Consultative On-Line Help System BIBA 363-383
  Godwin M. Gwei; Eric Foxley
Most computer systems provide a help facility which offers users on-line assistance in response to a query. The explanations provided by most systems often concentrate on the syntax of commands, and the user is expected to cite exactly the name of the command for which help is required. Often, the vocabulary involved is restrictive (because exact citation is essential) and confusing (because different systems adopt different terms).
This paper explores ways of improving on-line help systems:
  • by developing user models to enable the explanations to match that user's
       particular needs and experience;
  • by categorising explanations in a way which enables more appropriate
       information to be presented; and
  • by incorporating an interface which provides users with the freedom to use
       natural language with a wide-ranging vocabulary of their choice.
   The paper also describes the implementation of a help system which incorporates the listed facilities. In conclusion it outlines the prospects for developing a flexible vocabulary front-end to any command language.
Do They Know What They're Doing? An Evaluation of Word-Processor Users' Implicit and Explicit Task-Relevant Knowledge, and Its Role in Self-Directed Learning BIBA 385-398
  Pamela Briggs
Many people teach themselves how to use word-processing systems, but how successful are they in their endeavor? This study investigates a number of theoretical and practical issues associated with self-directed learning. Users of differing experience were asked to perform a simple task, using an unfamiliar word-processing system. However, they were given no information about the new system prior to task commencement, save information they explicitly requested. An analysis of users' questions revealed that only the most experienced had a suitable mental task description available to them. Others relied upon visible components of the task to cue their questioning strategy in a manner which suggested reliance upon a recognition, rather than a recall, strategy. A clear dissociation was noted between users' procedural knowledge of a task, reflected in their performance ability, and their metaknowledge of the task, i.e. their awareness of what procedural knowledge would be required in order to complete the task. The implications of these findings for the design of user support systems, and for user modelling, are discussed.
On the Training of EDP Novices on the Personal Computer BIBA 399-421
  Jurgen Pilgrim
Notions are developed on the training of employed EDP novices in the field of personal computing. A strategy and method for conducting PC training courses is introduced, taking into account the complicated starting situation of the current purposive computerization at the institutions. Based on a questionnaire completed by the EDP novices who took part in these courses, the results of the study are reported and discussed. The results show that the method introduced for imparting knowledge in the field of personal computing proves its worth.
Modes in Non-Computer Devices BIBA 423-438
  Jeff Johnson
The user-interfaces of several non-computer devices are examined for modes. The distinguishing features of these modes, the problems they cause and those they solve, possible ways in which the devices might be improved, and the implications for design of computer-based interactive systems are discussed.
Effect of Modularity on Maintainability of Rule-Based Systems BIBA 439-447
  J. Steve Davis
Software engineers have for many years employed modularity in conventional programming. Recently, Jacob and Froscher proposed a new method for achieving modularity in rule-based systems. We conducted the first experiments to evaluate the effect of their new method on the maintainability of rule-based systems. Our results were encouraging. Subjects who used a modular rule-based system tended to accomplish modifications more correctly and more quickly than those who used a non-modular version.
Experimental Investigation of the Utility of Data Structure and E-R Diagrams in Database Query BIBA 449-459
  J. Steve Davis
We empirically tested several graphical forms of database documentation to determine their utility in the performance of database queries. The data structure diagram and the entity-relationship diagram were shown to be helpful in performing queries on a relational database.
The Nature and Development of Programming Plans BIBA 461-481
  Simon P. Davies
The notion of the programming plan as a description of one of the main types of strategy employed in the comprehension of programs is now widely accepted to form an adequate basis for an account of programming knowledge. Such plans are thought to be used universally in all programming languages by expert programmers. Recent work, however, has questioned the psychological reality of such plans and has suggested that they may be artifacts of the particular programming language used and the structure that it imposes on the programmer via the constraints of certain features of its notation. This paper considers the results of two experimental studies that suggest that the development and use of programming plans is strongly tied to the particular learning experience of the programmer. It is argued that programming plans cannot be considered solely to be natural strategies that evolve independently of teaching, nor mere artifacts or static properties of a particular programming language. Rather, such plans can be seen to be related to the expression of design-related skills. This has a number of important implications for our understanding of the nature and development of programming plans; in particular, it appears that the notion of the programming plan provides too limited a view to adequately and straightforwardly explain the differences between novice and expert programming performance.

IJMMS 1990 Volume 32 Issue 5

System Demands on Mental Models for a Fulltext Database BIBA 483-509
  Cecilia Katzeff
The aim of the study was to investigate the relationship between mental models required by a database system, clues provided by the system to these models, and users' behaviour in operating the system. For this purpose a fulltext database system containing news articles about telecommunication and information technology was used. Ten non-professional computer users participated in the study. The subjects' tasks were to retrieve and display certain articles and pieces of articles on the screen. By analysing knowledge needed to carry out these tasks, the required mental models could be identified. Then, through analysing the specific system clues to the required mental models, difficulties subjects would run into were predicted. The fulltext database system employed for the study operated on three different levels of display. That is, it operated as if it had three different modes, each level corresponding to a different mode. The clarity of clues to adequate mental models differed on the three levels. Most salient were clues concerning the order in which articles and pieces of articles were presented. These clues were least clear on the second level of display. As predicted, this was also the level of display on which subjects' performance was worst (p<0.05). Among difficulties identified in subjects' think-aloud protocols, difficulties with the numbering system ("record numbers") were the most frequent on the second level. In contrast, clues concerning the order of articles were relatively clear on the first level of display. As predicted, subjects performed well on this level, with 90% correct responses. The central role of appropriate clues to adequate mental models is further illustrated by examples of subjects' mental model reasoning. In these examples three different phases of mental models are shown to exist -- a construction, a testing, and a running phase.
User Models: Theory, Method, and Practice BIBA 511-543
  Robert B. Allen
While the technology of new information services is rapidly advancing, it is not clear how this technology can be best adapted to people's needs and interests. One possibility is that user models may select and filter information sources for readers. This paper examines the prospects and implications of automatic filtering of information, and focuses on predicting preferences for news articles presented electronically. The results suggest that the prediction of preferences can be straightforward when general categories for news articles are used; however, prediction for specific news reports is much more difficult. In addition, an effort is made to establish a systematic study of the effectiveness of information interfaces and user models. Fundamental issues are raised such as techniques for evaluating user models, their essential components, their relationship to information retrieval models, and the limits of using them to predict user behavior at various levels of granularity. For instance, prediction and evaluation methodology may be adopted from personality psychology. Finally, several directions for research are discussed such as treating news as hypertext and integration of news with other information sources.
Source Models for Natural Language Text BIBA 545-579
  Ian H. Witten; Timothy C. Bell
A model of a natural language text is a collection of information that approximates the statistics and structure of the text being modeled. The purpose of the model may be to give insight into rules which govern how language is generated, or to predict properties of future samples of it. This paper studies models of natural language from three different, but related, viewpoints. First, we examine the statistical regularities that are found empirically, based on the natural units of words and letters. Second, we study theoretical models of language, including simple random generative models of letters and words whose output, like genuine natural language, obeys Zipf's law. Innovation in text is also considered by modeling the appearance of previously unseen words as a Poisson process. Finally, we review experiments that estimate the information content inherent in natural text.
Clarifying the Distinction Between Lexical and Gestural Commands BIBA 581-590
  Palmer Morrel-Samuels
A distinction is drawn between conventional lexical commands and gestural commands (e.g. circles, arrows, X's, etc.). The distinction is discussed in the context of a central metaphor that likens computer use to communication between programmer and user. A number of limitations and benefits unique to gestural interfaces are described. It is suggested that gestural commands tend to be terse, common, unambiguous, iconic, and similar to the spontaneous hand gestures that accompany speech. The potential effects of these five qualities are outlined by summarizing selected research from cognitive and social psychology. Some potential applications are also described.
Discourse Theory and Interface Design: The Case of Pointing with the Mouse BIBA 591-602
  Eoghan Mac Aogain; Ronan Reilly
An empirical study is reported in which a formal model of person-person discourse is applied to person-machine communication, with special reference to the use of the mouse for pointing at objects on the screen. Pointing with the mouse is compared with its natural-language equivalent, pointing with the finger or hand while speaking. Contrasting discourse structures are proposed for (1) pointing with the mouse which involves auditory signals, one from the user and an acknowledgment from the system, and (2) pointing with the mouse which consists of silent parking of the cursor, unacknowledged by the system. It is argued that the latter is more natural and leads to more "efficient" communication, as this is understood in Situation Semantics. This should lead to a lessening of keyboard input and a less complex discourse structure. The latter hypothesis was confirmed but not the former. A number of practical and theoretical implications are discussed.
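The random generative models mentioned here are easy to simulate; in the following sketch (Python; the alphabet size, space probability, and text length are arbitrary illustrative choices, not parameters from the paper), random letters with a random word boundary already produce a rank-frequency distribution of the Zipf-like kind the abstract describes:

```python
import random
from collections import Counter

def random_text_words(n_chars, alphabet="abcde", p_space=0.2, seed=1):
    """Generate 'words' by emitting random letters, with a space
    (word boundary) occurring with probability p_space."""
    rng = random.Random(seed)
    chars = [" " if rng.random() < p_space else rng.choice(alphabet)
             for _ in range(n_chars)]
    return "".join(chars).split()

words = random_text_words(100_000)
ranked = Counter(words).most_common()
# Under Zipf's law, frequency * rank stays roughly constant:
products = [freq * (rank + 1) for rank, (_, freq) in enumerate(ranked[:50])]
```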

IJMMS 1990 Volume 32 Issue 6

The Child as Naive User: A Study of Database Use with Young Children BIBA 603-625
  Janet Spavold
A year-long project to study two groups of children aged between nine and eleven years was undertaken, in which the children compiled substantial databases of 1881 Census material and subsequently interrogated them. The main aims of the study were to obtain information to provide guidance for teachers on the introduction of databases with young children, and to gain insight into the methods young children employed to understand database information in terms of their own experience. The children's reactions to menus and commands, and their ability to navigate around the database, were noted. Attention was paid to their mental mapping and to the effectiveness with which they used the system.
    Computer-Mediated Communication System Network Data: Theoretical Concerns and Empirical Examples BIBA 627-647
      Ronald E. Rice
    The review combines two separate foci in recent research: (1) the diffusion and use of computer-mediated communication (CMC) systems in organizations, and (2) the conceptualization of communication as a process of interaction and convergence, as represented by the network paradigm. The article discusses (1) rationales for this combined focus based upon the characteristics of CMC systems, (2) application of the network paradigm to the study of CMC systems, (3) the collection of samples, usage data, network flows, and content by CMC systems, and (4) some theoretical issues that may be illuminated through analyses of data collected by CMC systems. The article concludes by discussing issues of reliability, validity and ethics.
    An Evaluation of Look-Ahead Help Fields on Various Types of Menu Hierarchies BIBA 649-661
      Robert J. Kreigh; Joseph F. Pesot; Charles G. Halcomb
    Look-ahead help fields were examined in a menu selection task using menus varying in depth and breadth. Experimental subjects received menu panels of specific menu alternatives plus help fields containing upcoming alternatives, whereas control subjects received only the specific menu alternatives. Subjects were permitted to navigate through the menu hierarchy, searching for targets. Results replicated earlier studies in terms of depth and breadth considerations. However, the addition of help fields did not enhance subject performance in any systematic fashion. Comparisons are made with previous findings in this area.
    Partitioned Frame Networks for Multi-Level, Menu-Based Interaction BIBA 663-672
      James D. Arthur
    Menu-based systems have continued to flourish because they present a simple interaction format that is adaptable to many diverse applications. The continued integration of menu-based interaction with increasingly sophisticated software systems, however, is resulting in complex, monolithic frame networks with several undesirable characteristics. This paper presents a novel approach to frame network construction and menu-based interaction for application systems that support user task specifications. The approach is based on partitioning the conventional, monolithic frame network into a set of hierarchically structured, disjoint networks that preserves the original network topology while reducing its overall complexity and size. By exploiting partitioned frame networks, menu-based interaction can support multiple levels of task specification. Initially, a task overview can be constructed without the user being encumbered by refinement details that could obscure the overall solution specification. Guided by the overview, subsequent interaction leads to a detailed refinement of the intended task specification.
    User Misconceptions of Information Retrieval Systems BIBA 673-692
      Hsinchun Chen; Vasant Dhar
    We report the results of an investigation in which thirty subjects were observed performing subject-based searches in an online catalog system. The observations revealed a range of misconceptions users hold when performing subject-based searches. We have developed a taxonomy that characterizes these misconceptions and a knowledge representation that explains them. Directions for improving search performance are also suggested.
    Sensitivity Analysis of Rough Classification BIBA 693-705
      Krzysztof Slowinski; Roman Slowinski
    Rough classification of patients after highly selective vagotomy (HSV) for duodenal ulcer is analysed from the viewpoint of the sensitivity of previously obtained results to minor changes in the norms of attributes. The norms translate exact values of pre-operative quantitative attributes into qualitative terms, e.g. "low", "medium" and "high". An extensive computational experiment leads to the general conclusion that the original norms, following from medical experience, were well defined, and that the results of analysis of the considered information system using rough sets theory are robust in the sense of low sensitivity to minor changes in the norms of attributes.
    Structure and Mnemonics in Computer and Command Languages BIBA 707-722
      Mert Cramer
    Payne and Green have proposed the Task Action Grammar (TAG) as a formalism for the evaluation of command language organization. TAG is a competence model of command language organization which emphasizes the structural organization of the command language. That is, a group of commands all with the same form but differing values is predicted to be easier to use than one where each command of the group has a unique pattern of values. From the first, Payne and Green (1983, 1984) have used their experimental results at the University of Sheffield to illustrate the two-level organization of TAG. The four test command languages they used were: "structure and mnemonics"; "structure only"; "mnemonics only"; and a version of the EMACS editor command language. Subjects' recall performance for any of the first three languages was better than for the fourth. The original experiment was replicated at the University of Waterloo to gain a better view of users' capabilities for command language usage. At Waterloo a fifth language, "neither structure nor mnemonics", was added to the testing to complete the 2 x 2 block design. Contrary to the original results, in the replication the structural factor was not significant. In fact, the structure-only language, the EMACS variant and the language with neither structure nor mnemonics were not significantly different. Considering only the languages with structure, the use of word abbreviations as mnemonics appears to be more effective than the use of graphical symbols.
       As TAG rests on the claimed importance of structure, this finding raises questions about its utility. A categorization exercise given to the Waterloo subjects yielded the only result that showed any influence of the structure factor. If TAG does provide a performance model of command languages, it appears to have much less influence on users' performance than some types of mnemonics.
       The EMACS variant has more complex organization than any other of the test languages. The test results do not show that the subjects were able to use the clues provided. Payne and Green (1984, 1986), Carroll (1982) and Dixon (1987) have suggested possible explanations for this poor recall, but the current work leaves the question unresolved.