
International Journal of Human-Computer Studies 51

Editors: B. R. Gaines
Dates: 1999
Volume: 51
Publisher: Academic Press
Standard No: ISSN 0020-7373; TA 167 A1 I5
Papers: 55
Links: Table of Contents
  1. IJHCS 1999 Volume 51 Issue 1
  2. IJHCS 1999 Volume 51 Issue 2
  3. IJHCS 1999 Volume 51 Issue 3
  4. IJHCS 1999 Volume 51 Issue 4
  5. IJHCS 1999 Volume 51 Issue 5
  6. IJHCS 1999 Volume 51 Issue 6

IJHCS 1999 Volume 51 Issue 1

Editorial: Electronic Submission of Papers to IJHCS BIB 1
  B. R. Gaines
Introduction to the Special Issue "Best of Empirical Studies of Programmers 7" BIB 3-5
  Susan Wiedenbeck; Jean Scholtz
Mental Imagery in Program Design and Visual Programming BIBA 7-30
  Marian Petre; Alan F. Blackwell
There is widespread anecdotal evidence that expert programmers make use of visual mental images when they are designing programs. This evidence is used to justify the use of diagrams and visual programming languages during software design. This paper reports the results of two studies. In the first, expert programmers were directly questioned regarding the nature of their mental representations while they were engaged in a design task. This investigative technique was used with the explicit intention of eliciting introspective reports of mental imagery. In the second, users of a visual programming language responded to a questionnaire in which they were asked about cognitive processes. The resulting transcripts displayed a considerable number of common elements. These suggest that software design shares many characteristics of more concrete design disciplines. The reports from participants in the two studies, together with previous research into imagery use, indicate potential techniques for further investigation of software development support tools and design strategies.
Program Understanding Behavior During Corrective Maintenance of Large-Scale Software BIBA 31-70
  A. Marie Vans; Anneliese Von Mayrhauser; Gabriel Somlo
This paper reports on a software understanding field study of corrective maintenance of large-scale software. Participants were professional software maintenance engineers. The paper reports on the general understanding process, the types of actions programmers preferred during the debugging task, the level of abstraction at which they were working and the role of hypotheses in the debugging strategies they used. The results of the observation are also interpreted in terms of the information needs of these software engineers. We found that programmers work at all levels of abstraction (code, algorithm, application domain) about equally. They frequently switch between levels of abstraction. The programmers' main concerns are with what software does and how this is accomplished, not why software was built a certain way. These questions guide the work process. Information is sought and cross-referenced from a variety of sources from application domain concepts to code-related information, outpacing current maintenance environments' capabilities which are mostly stratified by information source, making cross-referencing difficult.
Novice Comprehension of Small Programs Written in the Procedural and Object-Oriented Styles BIBA 71-87
  Susan Wiedenbeck; Vennila Ramalingam
This research studied the comprehension of small procedural and object-oriented programs by novice programmers. The objective was to find out what kinds of information novice programmers extract from small programs and to infer from this the mental representation formed during program comprehension. In particular, the question was whether novices' mental representations focus more on domain-level or program-level knowledge and whether the mental representations of object-oriented programs differ from those of procedural programs. The experiment indicated that novices tend to develop a mental representation of small object-oriented programs strong in function-related knowledge, but weaker in data flow and program-related knowledge. By contrast, novices' mental representations of small procedural programs were stronger in program-related knowledge. The results are discussed in terms of theories of program comprehension and programming pedagogy.
Comparison of Visual and Textual Languages Via Task Modeling BIBA 89-115
  Marian G. Williams; J. Nicholas Buehler
In order for comparative studies of programming languages to be meaningful, differences between the languages need to be carefully studied and well understood. Languages that appear to differ only in syntax (for example, visual vs. textual syntax) may in fact differ greatly in usability. Such differences can confound comparative studies unless they are controlled for. In this paper, we examine the usefulness of fine-grained task modeling for studying the usability of programming languages. We focus on program entry, and demonstrate how to create models of program entry tasks for both visual and textual languages. We also demonstrate how to derive performance time estimates from the models using keystroke-level analysis. A by-product of the model building is a collection of functional-level models that can serve as building blocks for modeling higher-level visual programming tasks. We then report on a comparative study of languages with the same semantics but different syntax (visual and textual). Model-based time predictions of program entry tasks were compared to observed times from an empirical study. The time estimates for the visual condition greatly overestimated the observed times. The primary source of the overestimates appeared to be the time estimate for pointing with the mouse. We then look at three different approaches to improving program entry models. We report on a separate study to calibrate the mouse-pointing time estimate, and demonstrate improved correlation between predicted and observed times with the new estimate. We also apply task modeling to program editing activities, in order to model error recovery behavior during program entry. Finally, we discuss language-specific customization of the keystroke-level operator for mental preparation. We conclude that task modeling is a useful technique for studying differences in the usability of programming languages at the keystroke level.
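The keystroke-level arithmetic behind such predictions can be sketched briefly. The operator durations below are the standard keystroke-level model values from Card, Moran and Newell; the two operator sequences are invented illustrations, not encodings of the paper's actual program entry tasks:

    KLM_SECONDS = {
        "K": 0.20,  # keystroke, skilled typist
        "P": 1.10,  # point with the mouse at a target
        "B": 0.10,  # press or release the mouse button
        "H": 0.40,  # home hands between keyboard and mouse
        "M": 1.35,  # mental preparation
    }

    def predict_seconds(operators):
        """Sum standard operator times for a sequence such as 'MPBBPBB'."""
        return sum(KLM_SECONDS[op] for op in operators)

    # Hypothetical textual entry: think, then type a 10-character statement.
    print(predict_seconds("M" + "K" * 10))       # 1.35 + 10 * 0.20 = 3.35 s
    # Hypothetical visual entry: think, point at a palette icon, click,
    # point at the canvas, click (hands assumed to stay on the mouse).
    print(predict_seconds("M" + "PBB" + "PBB"))  # 1.35 + 2 * 1.30 = 3.95 s

Note how the pointing operator P dominates the visual estimate, which is consistent with the paper's finding that the mouse-pointing time estimate was the primary source of overestimation.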

IJHCS 1999 Volume 51 Issue 2

Editorial: 30th Anniversary Issue BIB 119-124
Psychological Evaluation of Two Conditional Constructions Used in Computer Languages BIBA 125-133
  M. E. Sime; T. R. G. Green; D. J. Guest
There is a need for empirical evaluation of programming languages for unskilled users, but it is more effective to compare specific features common to many languages than to compare complete languages. This can be done by devising micro-languages stressing the feature of interest, together with a suitable subject matter for the programs. To illustrate the power of this approach two conditional constructions are compared: a nestable construction, like that of Algol 60, and a branch-to-label construction, as used in many simpler languages. The former is easier for unskilled subjects. Possible reasons for this finding are discussed.
Note: Received August 17, 1972
An Experiment in Linguistic Synthesis with a Fuzzy Logic Controller BIBA 135-147
  E. H. Mamdani; S. Assilian
This paper describes an experiment on the "linguistic" synthesis of a controller for a model industrial plant (a steam engine). Fuzzy logic is used to convert heuristic control rules stated by a human operator into an automatic control strategy. The experiment was initiated to investigate the possibility of human interaction with a learning controller. However, the control strategy set up linguistically proved to be far better than expected in its own right, and the basic experiment of linguistic control synthesis in a non-learning controller is reported here.
Note: Received November 2, 1973
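The core mechanism, converting linguistic rules into a control signal, fits in a few lines. The following is a minimal Mamdani-style sketch in Python; the membership functions and the two rules are invented for illustration and are far simpler than the steam-engine rule set reported in the paper:

    def tri(x, a, b, c):
        """Triangular membership: rises from a, peaks at b, falls to c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def control_action(error):
        # Fuzzify: degrees to which the error is "negative" or "positive".
        neg = tri(error, -2.0, -1.0, 0.0)
        pos = tri(error, 0.0, 1.0, 2.0)
        # Two linguistic rules, max-min (Mamdani) inference:
        #   IF error IS negative THEN heat change IS increase
        #   IF error IS positive THEN heat change IS decrease
        universe = [u / 10.0 for u in range(-20, 21)]  # candidate outputs
        def aggregated(u):
            increase = min(neg, tri(u, 0.0, 1.0, 2.0))
            decrease = min(pos, tri(u, -2.0, -1.0, 0.0))
            return max(increase, decrease)
        # Defuzzify by centroid of the aggregated output fuzzy set.
        num = sum(u * aggregated(u) for u in universe)
        den = sum(aggregated(u) for u in universe)
        return num / den if den else 0.0

    print(control_action(-0.5))  # below setpoint: positive heat change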
The Organization of the Living: A Theory of the Living Organization BIBA 149-168
  Humberto R. Maturana
The fundamental feature that characterizes living systems is autonomy, and any account of their organization as systems that can exist as individual unities must show what autonomy is as a phenomenon proper to them, and how it arises in their operation as such unities. Accordingly the following is proposed.
  • That autonomy in living systems is a feature of self-production
       (autopoiesis), and that a living system is properly characterized only as a
       network of processes of production of components that is continuously, and
       recursively, generated and realized as a concrete entity (unity) in the
       physical space, by the interactions of the same components that it produces
       as such a network. This organization I call the autopoietic organization,
       and any system that exhibits it is an autopoietic system in the space in
       which its components exist; in this sense living systems are autopoietic
       systems in the physical space.
  • That the basic consequence of the autopoietic organization is that
       everything that takes place in an autopoietic system is subordinated to the
       realization of its autopoiesis, otherwise it disintegrates.
  • That the fundamental feature that characterizes the nervous system is that it
       is a closed network of interacting neurons in which every state of neuronal
       activity generates other states of neuronal activity. Since the nervous
       system is a component subsystem in an autopoietic unity, it operates by
       generating states of relative neuronal activity that participate in the
       realization of the autopoiesis of the organism which it integrates.
  • That the autopoietic states that an organism adopts are determined by its
       structure (the structure of the nervous system included), and that the
       structure of the organism (including its nervous system) is at any instant
       the result of its evolutionary and ontogenic structural coupling with the
       medium in which it is autopoietic, obtained while the autopoiesis is
       realized.
  • That language arises as a phenomenon proper to living systems from the
       reciprocal structural coupling of at least two organisms with nervous
       systems, and that self-consciousness arises as an individual phenomenon from
       the recursive structural coupling of an organism with language with its own
       structure through recursive self-description.
Note: Received October 2, 1974
Behavioral Issues in the Use of Interactive Systems BIBA 169-196
  Lance A. Miller; John C. Thomas, Jr.
This paper identifies behavioral issues related to the use of interactive computers primarily by persons who are not computer professionals, so-called "general users". This is not an exhaustive literature survey but instead provides: (1) a structure for discussing issues of interactive computing, and (2) the authors' best estimate of important behavioral problems, with suggestions for solutions. The discussion is limited in this paper to general issues which do not take into account the user's particular task. The two major topics are System Characteristics (performance, facilities and on-line information), and Interface Characteristics (dialogue style, displays and graphics, other input/output media).
Note: Received March 10, 1977
Towards a Theory of the Cognitive Processes in Computer Programming BIBA 197-211
  Ruven Brooks
While only in the past ten years have large numbers of people been engaged in computer programming, a small body of studies on this activity has already been accumulated. These studies are, however, largely atheoretical. The work described here has as its goal the creation of an information processing theory sufficient to describe the findings of these studies. The theory postulates understanding, method-finding, and coding processes in writing programs, and presents an explicit model for the coding process.
Note: Received April 5, 1977
Verbal Reports as Evidence of the Process Operator's Knowledge BIBA 213-238
  Lisanne Bainbridge
Verbal reports are usually collected with the aim of understanding mental behaviour. As it is not possible to observe mental behaviour directly we cannot test for a correlation between report and behaviour, and cannot assume one. Verbal data cannot therefore be used to test theories of mental behaviour. Verbal data may be produced by a separate report-generating process which may give a distorted account. The data can be useful for practical purposes if these distortions are minimal. This paper attempts to assess the conditions in which this is the case. Several methods of obtaining verbal reports are surveyed: system state/action state diagram, questionnaire, interview, static simulation and verbal protocol. Techniques for collecting and analysing the data are described. In each case the small amount of data available on the correlation between reports and observed behaviour is reviewed. The results are not clear. Some verbal data are evidently misleading. Others, however, are sufficiently good to encourage the search for more information about factors affecting their validity.
Knowledge Acquisition by Encoding Expert Rules Versus Computer Induction from Examples: A Case Study Involving Soybean Pathology BIBA 239-263
  R. S. Michalski; R. L. Chilausky
In view of growing interest in the development of knowledge-based computer consulting systems for various problem domains, the problems of knowledge acquisition have special significance. Current methods of knowledge acquisition rely entirely on the direct representation of the knowledge of experts, which is usually a very time- and effort-consuming task. The paper presents results from an experiment to compare the above method of knowledge acquisition with a method based on inductive learning from examples. The comparison was done in the context of developing rules for soybean disease diagnosis and has demonstrated an advantage of the inductively derived rules in performing a testing task (which involved diagnosing a few hundred cases of soybean diseases).
Note: Received June 15, 1979
The Black Box Inside the Glass Box: Presenting Computing Concepts to Novices BIBA 265-277
  Benedict du Boulay; Tim O'Shea; John Monk
Simplicity and visibility are two important characteristics of programming languages for novices. Novices start programming with very little idea of the properties of the notional machine implied by the language they are learning. To help them learn these properties, the notional machine should be simple. That is, it should consist of a small number of parts that interact in ways that can be easily understood, possibly by analogy to other mechanisms with which the novice is more familiar. A notional machine is the idealized model of the computer implied by the constructs of the programming language. Visibility is concerned with methods for viewing selected parts and processes of this notional machine in action. We introduce the term "commentary", which is the system's dynamic characterization of the notional machine, expressed in either text or pictures on the user's terminal. We examine the simplicity and visibility of three systems, each designed to provide programming experience to different populations of novices.
The ZOG Approach to Man-Machine Communication BIBA 279-306
  G. Robertson; D. McCracken; A. Newell
ZOG is a rapid response, large network, menu selection system used for man-machine communication. The philosophy behind this style of communication was first developed by the PROMIS (Problem Oriented Medical Information System) Laboratory of the University of Vermont. ZOG has been used in a number of task domains to help explore the limits and potential benefits of the communication philosophy. This paper discusses the basic ideas in ZOG, describes the architecture of a system implemented to carry out that exploration, and discusses our initial experience.
Note: Received August 6, 1980
Why Interactive Computer Systems are Sometimes Not Used by People Who Might Benefit from Them BIBA 307-321
  Raymond S. Nickerson
Several reasons are considered why some people who might benefit from using computer systems do not use them. The discussion is organized around examples of several classes of complaints that abstainers and dissatisfied users have been known to make regarding various aspects of the design and operation of specific computer-based systems.
Note: Received February 10, 1981
Users are Individuals: Individualizing User Models BIBA 323-338
  Elaine Rich
It has long been recognized that in order to build a good system in which a person and a machine cooperate to perform a task it is important to take into account some significant characteristics of people. These characteristics are used to build some kind of a "user model". Traditionally, the model that is built is a model of a canonical (or typical) user. But often individual users vary so much that a model of a canonical user is insufficient. Instead, models of individual users are necessary. This article presents some examples of situations in which individual user models are important. It also presents some techniques that make the construction and use of such models possible. These techniques all reflect a desire to place most of the burden of constructing the models on the system, rather than on the user. This leads to the development of models that are collections of good guesses about the user. Thus some kind of probabilistic reasoning is necessary. And as the models are being used to guide the underlying system, they must also be monitored and updated as suggested by the interactions between the user and the system. The performance of one system that uses some of these techniques is discussed.
Note: Received May 13, 1981
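As a concrete reading of "collections of good guesses", consider a stereotype-based sketch in the spirit of the approach described here; the cue names, traits and confidence numbers are invented for illustration and are not taken from the paper's system:

    STEREOTYPES = {
        # observed cue -> list of (trait, guessed value, confidence 0..1)
        "uses_terse_commands": [("expertise", "high", 0.7),
                                ("wants_verbose_help", False, 0.6)],
        "asks_for_help_often": [("expertise", "low", 0.8),
                                ("wants_verbose_help", True, 0.7)],
    }

    class UserModel:
        def __init__(self):
            self.traits = {}  # trait -> (value, confidence)

        def activate(self, cue):
            """Fold a stereotype's guesses in, keeping the more confident value."""
            for trait, value, conf in STEREOTYPES.get(cue, []):
                if conf > self.traits.get(trait, (None, 0.0))[1]:
                    self.traits[trait] = (value, conf)

        def observe(self, trait, value):
            """Direct evidence from the dialogue outranks stereotype guesses."""
            self.traits[trait] = (value, 0.95)

    model = UserModel()
    model.activate("uses_terse_commands")      # guesses from a stereotype
    model.observe("wants_verbose_help", True)  # user asked for long explanations
    print(model.traits)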
Cognitive Systems Engineering: New Wine in New Bottles BIBA 339-356
  Erik Hollnagel; David D. Woods
This paper presents an approach to the description and analysis of complex Man-Machine Systems (MMSs) called Cognitive Systems Engineering (CSE). In contrast to traditional approaches to the study of man-machine systems, which mainly operate on the physical and physiological level, CSE operates on the level of cognitive functions. Instead of viewing an MMS as decomposable by mechanistic principles, CSE introduces the concept of a cognitive system: an adaptive system which functions using knowledge about itself and the environment in the planning and modification of actions. Operators are generally acknowledged to use a model of the system (machine) with which they work. Similarly, the machine has an image of the operator. The designer of an MMS must recognize this, and strive to obtain a match between the machine's image and the user characteristics on a cognitive level, rather than just on the level of physical functions. This article gives a presentation of what cognitive systems are, and of how CSE can contribute to the design of an MMS, from cognitive task analysis to final evaluation.
Note: Received March 26, 1982
Deep Versus Compiled Knowledge Approaches to Diagnostic Problem-Solving BIBA 357-368
  B. Chandrasekaran; Sanjay Mittal
Most current-generation expert systems use knowledge which does not represent a deep understanding of the domain, but is instead a collection of "pattern-action" rules, which correspond to the problem-solving heuristics of the expert in the domain. There has thus been some debate in the field about the need for and role of "deep" knowledge in the design of expert systems. It is often argued that this underlying deep knowledge will enable an expert system to solve hard problems. In this paper we consider diagnostic expert systems and argue that, given a body of underlying knowledge that is relevant to diagnostic reasoning in a medical domain, it is possible to create a diagnostic problem-solving structure which has all the aspects of the underlying knowledge needed for diagnostic reasoning "compiled" into it. It is argued that this compiled structure can solve all the diagnostic problems in its scope efficiently, without any need to access the underlying structures. We illustrate such a diagnostic structure by reference to our medical system MDX. We also analyze the use of these knowledge structures in providing explanations of diagnostic reasoning.
Rough Classification BIBA 369-383
  Zdzislaw Pawlak
This article presents a new concept of approximate analysis of data, based on the idea of a "rough" set. The notion of approximate (rough) description of a set is introduced and investigated. The application to medical data analysis is shown as an example.
Note: Received May 6, 1983
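The rough approximation of a set can be stated compactly. In the sketch below (the toy patient table is invented), objects are grouped into indiscernibility classes by their attribute values; a class falls in the lower approximation when it lies wholly inside the target set, and in the upper approximation when it merely intersects it:

    from collections import defaultdict

    table = {               # object -> attribute values (hypothetical data)
        "p1": ("fever", "cough"),
        "p2": ("fever", "cough"),
        "p3": ("fever", "none"),
        "p4": ("normal", "none"),
    }
    target = {"p1", "p3"}   # e.g. patients with a confirmed diagnosis

    classes = defaultdict(set)          # indiscernibility classes
    for obj, attrs in table.items():
        classes[attrs].add(obj)

    lower, upper = set(), set()
    for cls in classes.values():
        if cls <= target:               # wholly contained: certainly in target
            lower |= cls
        if cls & target:                # intersects: possibly in target
            upper |= cls

    print(lower)  # {'p3'}: p1 is indiscernible from p2, so it drops out
    print(upper)  # {'p1', 'p2', 'p3'}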
Metaphor, Computing Systems, and Active Learning BIBA 385-403
  John M. Carroll; Robert L. Mack
Recent discussion has resolved the question of how prior knowledge organizes new learning into the technical definition and study of "metaphor". Some theorists have adopted an "operational" approach, focusing on the manifest effects of suggesting metaphoric comparisons to learners. Some have resolved the question formally into a "structural" definition of metaphor. However, structural and operational approaches typically ignore the goal-directed, learner-initiated learning process through which metaphors become relevant and effective in learning. Taking this process seriously affords an analysis of metaphor that explains why metaphors are intrinsically open-ended and how their open-endedness stimulates the construction of mental models.
Note: Received January 17, 1984
An Approach to the Formal Analysis of User Complexity BIBA 405-434
  David Kieras; Peter G. Polson
A formal approach to analysing the user complexity of interactive systems or devices is described, based on theoretical results from cognitive psychology. The user's knowledge of how to use a system to accomplish the various tasks is represented in a procedural notation that permits quantification of the amount and complexity of the knowledge required and the cognitive processing load involved in using a system. Making a system more usable can be accomplished by altering its design until the knowledge is adequately simplified. By representing the device behaviour formally as well, it is possible to simulate the user-device interaction to obtain rigorous measures of user complexity.
Note: Received February 5, 1983
The User's Mental Model of an Information Retrieval System: An Experiment on a Prototype Online Catalog BIBA 435-452
  Christine L. Borgman
An empirical study was performed to train naive subjects in the use of a prototype Boolean logic-based information retrieval system on a database of bibliographic records. The research was based on the mental models theory which proposes that people can be trained to develop a 'mental model' or a qualitative simulation of a system which will aid in generating methods for interacting with the system, debugging errors, and keeping track of one's place in the system. It follows that conceptual training based on a system model will be superior to procedural training based on the mechanics of the system. We performed a laboratory experiment with two training conditions (model and procedural), and with each condition split by sex. Forty-three subjects participated in the experiment, but only 32 were able to reach the minimum competency level required to complete the experiment. The data analysis incorporated time-stamped monitoring data, personal characteristics variables, affective variables, and interview data in which subjects described how they thought the system worked (an articulation of the model). As predicted, the model-based training had no effect on the ability to perform simple, procedural tasks, but subjects trained with a model performed better on complex tasks that required extrapolation from the basic operations of the system. A stochastic process analysis of search-state transitions reinforced this conclusion. Subjects had difficulty articulating a model of the system, and we found no differences in articulation by condition. The high number of subjects (26%) who were unable to pass the benchmark test indicates that the retrieval tasks were inherently difficult. More interestingly, those who dropped out were significantly more likely to be humanities or social science majors than science or engineering majors, suggesting important individual differences and equity issues. The sex-related differences were slight, although significant, and suggest future research questions.
Note: Received March 6, 1985
Expertise Transfer and Complex Problems: Using AQUINAS as a Knowledge-Acquisition Workbench for Knowledge-Based Systems BIBA 453-478
  John H. Boose; Jeffrey M. Bradshaw
Acquiring knowledge from a human expert is a major problem when building a knowledge-based system. Aquinas, an expanded version of the Expertise Transfer System (ETS), is a knowledge-acquisition workbench that combines ideas from psychology and knowledge-based systems research to support knowledge-acquisition tasks. These tasks include eliciting distinctions, decomposing problems, combining uncertain information, incremental testing, integration of data types, automatic expansion and refinement of the knowledge base, use of multiple sources of knowledge and providing process guidance. Aquinas interviews experts and helps them analyse, test, and refine the knowledge base. Expertise from multiple experts or other knowledge sources can be represented and used separately or combined. Results from user consultations are derived from information propagated through hierarchies. Aquinas delivers knowledge by creating knowledge bases for several different expert-system shells. Help is given to the expert by a dialog manager that embodies knowledge-acquisition heuristics.
   Aquinas contains many techniques and tools for knowledge acquisition; the techniques combine to make it a powerful testbed for rapidly prototyping portions of many kinds of complex knowledge-based systems.
Use of a Domain Model to Drive an Interactive Knowledge-Editing Tool BIBA 479-495
  Mark A. Musen; Lawrence M. Fagan; David M. Combs; Edward H. Shortliffe
The manner in which a knowledge-acquisition tool displays the contents of a knowledge base affects the way users interact with the system. Previous tools have incorporated semantics that allow knowledge to be edited in terms of either the structural representation of the knowledge or the problem-solving method in which that knowledge is ultimately used. A more effective paradigm may be to use the semantics of the application domain itself to govern access to an expert system's knowledge base. This approach has been explored in a program called OPAL, which allows medical specialists working alone to enter and review cancer treatment plans for use by an expert system called ONCOCIN. Knowledge-acquisition tools based on strong domain models should be useful in application areas whose structure is well understood and for which there is a need for repetitive knowledge entry.
Simplifying Decision Trees BIBA 497-510
  J. R. Quinlan
Many systems have been developed for constructing decision trees from collections of examples. Although the decision trees generated by these methods are accurate and efficient, they often suffer the disadvantage of excessive complexity and are therefore incomprehensible to experts. It is questionable whether opaque structures of this kind can be described as knowledge, no matter how well they function. This paper discusses techniques for simplifying decision trees while retaining their accuracy. Four methods are described, illustrated, and compared on a test-bed of decision trees from a variety of domains.
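One well-known member of this family of techniques is reduced-error pruning: walk the tree bottom-up and replace a subtree by a leaf whenever the leaf misclassifies no more held-out cases than the subtree does. A minimal sketch follows; the tree encoding is an illustrative assumption, not the paper's:

    from collections import Counter

    class Node:
        def __init__(self, attr=None, children=None, label=None):
            self.attr = attr                  # attribute tested at this node
            self.children = children or {}   # attribute value -> subtree
            self.label = label                # class label if this is a leaf

    def classify(node, example):
        if node.label is not None:
            return node.label
        child = node.children.get(example.get(node.attr))
        return classify(child, example) if child else None

    def errors(node, examples):
        return sum(1 for x, y in examples if classify(node, x) != y)

    def prune(node, examples):
        """Reduced-error pruning against a held-out (example, class) list."""
        if node.label is not None or not examples:
            return node
        for value, child in node.children.items():
            subset = [(x, y) for x, y in examples if x.get(node.attr) == value]
            node.children[value] = prune(child, subset)
        majority = Counter(y for _, y in examples).most_common(1)[0][0]
        leaf = Node(label=majority)
        return leaf if errors(leaf, examples) <= errors(node, examples) else node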

IJHCS 1999 Volume 51 Issue 3

Editorial: Organizational Memory and Knowledge Management BIB 511-516
  Stefan Decker; Frank Maurer
Organizational Aspects of Knowledge Lifecycle Management in Manufacturing BIBA 517-547
  C. E. Siemieniuch; M. A. Sinclair
This paper takes as its starting point that knowledge is not a unitary thing, and that in a competitive environment it has a lifecycle. In other words, if a company is to remain competitive, it must address the issues of new knowledge generation, its propagation across the organization, and its subsequent retirement. Some examples from manufacturing industry are outlined. The paper then discusses some classifications of knowledge, points out some management issues and then discusses what appears to be emerging "best practice" in this field. Implications for organizational configurations are then outlined. Finally, a list of outstanding issues is given. This discussion is based on interviews and findings from a number of collaborative projects in the European automotive industry over the past decade.
Knowledge Management Techniques: Teaching and Dissemination Concepts BIBA 549-566
  Ann MacIntosh; Ian Filby; John Kingston
This paper describes knowledge management teaching and dissemination concepts to support the training of professionals in an organization to manage their knowledge assets. They are based on AIAI's experience of working with large organizations to establish a technical knowledge management framework and to support their personnel in implementing the framework.
   The concepts support organizations that embark on a knowledge management programme. They promote the importance of knowledge management and an awareness of how knowledge management can be accomplished within, and across, operational divisions; create an awareness of a framework to achieve knowledge management; and establish a group of personnel who have skills in knowledge management techniques to enable them to facilitate the development, maintenance, use and sharing of the organization's knowledge assets.
   The main objective is to ensure that knowledge management techniques are rolled out across the organization. Importantly, these concepts provide the organization with the necessary training in the use of techniques to identify, analyse and manage knowledge assets.
Methods and Tools for Corporate Knowledge Management BIBA 567-598
  Rose Dieng; Olivier Corby; Alain Giboin; Myriam Ribiere
This article is a survey of some methods, techniques and tools aimed at managing corporate knowledge from a corporate memory designer's perspective. In particular, it analyses problems and solutions related to the following steps: detection of needs for a corporate memory, construction of the corporate memory, its diffusion (especially using Internet technologies), use, evaluation and evolution.
When Email Meets Organizational Memories: Addressing Threats to Communication in a Learning Organization BIBA 599-614
  David G. Schwartz
The communicative act in a learning organization is subject to a number of threats to its validity (Habermas, 1981), in particular the comprehensibility, truth, trustworthiness and appropriateness of a given message. Organizational memories (OMs) can be used to address these threats. Our focus is on email communication, which suffers from the same threats identified by Habermas. The integration of email with an OM can improve the quality of communication by applying meta-knowledge to appropriately link a given message to the OM. In this paper, we expand upon the direction taken by earlier work of Abecker et al. (1997) with respect to the importance of the object-meta relationship and the use of meta-knowledge to manage (or rather to complete) an OM. We suggest that the focus of the meta-knowledge in an email application should be on the roles, perspectives, and characteristics of the people in an organization rather than on knowledge description. This, we argue, will effectively ensure that knowledge will not be disassociated from the people and the situation (Sierhuis & Clancey, 1997). We present the HyperMail architecture and a sample application to illustrate how formal meta-knowledge is used to re-associate informal email communications to an OM.
Towards a Knowledge Technology for Knowledge Management BIBA 615-641
  Nick Milton; Nigel Shadbolt; Hugh Cottam; Mark Hammersley
Knowledge Management (KM) is crucial to organizational survival, yet is a difficult task requiring large expenditure of resources. Information Technology solutions, such as email, document management and intranets, are proving very useful in certain areas. However, many important problems still exist, providing opportunities for new techniques and tools more oriented towards knowledge. We refer to this as Knowledge Technology. A framework has been developed which has allowed opportunities for Knowledge Technology to be identified in support of five key KM activities: personalization, creation/innovation, codification, discovery and capture/monitor. In developing Knowledge Technology for these areas, methods from knowledge engineering are being explored. Our main work in this area has involved the application and evaluation of existing knowledge for a large intranet system. This, and other case studies, have provided important lessons and insights which have led to ongoing research in ontologies, generic models and process modelling methods. We believe that the evidence presented here shows that knowledge engineering has much to offer KM and can be the basis on which to move towards a Knowledge Technology.
Organizational Learning and Experience Documentation in Industrial Software Projects BIBA 643-661
  Dieter Landes; Kurt Schneider; Frank Houdek
Learning from experiences in the software domain is an important issue for the DaimlerChrysler Corporation. Unfortunately, there are no textbook recipes on how a process of organizational learning can be established. In particular, those types of experiences must be identified that are potentially valuable for reuse. Furthermore, the organization and representation of such experiences must be defined in such a way that they can easily be retrieved and used for the solving of new problems. In this paper, we provide some insights that we gained during the examination of these issues in projects aiming at establishing a so-called experience factory.
A Virtual Library for Building Community and Sharing Knowledge BIBA 663-685
  Scott Robertson; Kathy Reese
Libraries are hubs for social and intellectual interactions in communities and organizations. Virtual libraries should serve the same purpose, yet virtual libraries often focus simply on making their holdings available. In this article an on-line corporate library is described that places knowledge sharing and community building at the core of its design. The library system supports personal websites that are visible to the entire organization. Personal topic profiles for library research services, information services choice and collaborative research requests provide employees with views of each other's activities and interests. In particular, information about research questions being asked across all parts of the organization provides a unique window on the company's goals and activities. Collaboration and interest-matching tools help employees to share knowledge across the organization and to form special interest communities.
(KA)²: Building Ontologies for the Internet: A Mid-Term Report BIBA 687-712
  V. Richard Benjamins; Dieter Fensel; Stefan Decker; Asuncion Gomez Perez
Ontologies are becoming increasingly more important in many different areas, including the knowledge management area. In knowledge management, ontologies can be used as an instrument to make knowledge assets intelligently accessible to people in organizations through an Intranet or the Internet. Most enterprises agree that knowledge is an essential asset for success and survival on an increasingly competitive and global market. In this paper, we present an ontology-based approach through a large-scale initiative involving knowledge management for the knowledge-acquisition research community.

IJHCS 1999 Volume 51 Issue 4

Editorial: Evaluating Knowledge Engineering Techniques BIB 715-727
  Tim Menzies; Frank Van Harmelen
The Experimental Evaluation of Knowledge Acquisition Techniques and Methods: History, Problems and New Directions BIBA 729-755
  Nigel Shadbolt; Kieron O'Hara; Louise Crow
The special problems of experimentally evaluating knowledge acquisition and knowledge engineering tools, techniques and methods are outlined, and illustrated in detail with reference to two series of studies. The first is a series of experiments undertaken at Nottingham University under the aegis of the UK Alvey initiative and the ESPRIT project ACKnowledge. The second is the series of Sisyphus benchmark studies. A suggested programme of experimental evaluation is outlined which is informed by the problems with using Sisyphus for evaluation.
Knowledge-Based Systems' Validation: When to Stop Running Test Cases BIBA 757-781
  Juan P. Caraca-Valente; Jose L. Morant; Luis Gonzalez; J. Pazos
One of the unsettled problems in knowledge engineering and, particularly, in the field of validation is to determine when the validation process of a knowledge-based system is complete. Some hints are given in the literature, but little, if any, work has been done on how to address this problem from an analytical viewpoint. In this paper, the validation field is briefly surveyed, and we take a look at what the validation process should consist of. Then, an analytical model is proposed for obtaining the optimum number of test cases for validating the knowledge-based system under study. This mathematical model takes into account the v-type (knowledge-based system classed according to the validation process) defined in this paper, the expected degree of confidence in the validation process and previous results from similar knowledge-based system validation processes. It should be mentioned here that a theoretical result is given to solve an eminently practical problem, a rare occurrence in the field of artificial intelligence as a whole and especially in knowledge engineering.
Critical Success Metrics: Evaluation at the Business Level BIBA 783-799
  Tim Menzies
If we lack an objective human expert oracle who can assess a system, and if we lack a library of known or desired behavior, how can we assess an expert system? One method for doing so is a critical success metric (CSM). A CSM is an assessment of a running program which reflects the business concerns that prompted the creation of that program. Given pre-disaster knowledge, a CSM can be used while the expert system is in routine use, without compromising the operation of the system. A general CSM experiment is defined using pre-disaster points which can compare (e.g.) human to expert system performance. Examples of using CSMs are given from the domains of farm management and process control.
Empirical Evaluation of a Domain-Oriented Component Library Based on an Embedded Case Study Design BIBA 801-823
  Masahiro Hori
This paper presents an empirical evaluation of the real-life reuse of a domain-oriented library of problem-solving methods, in which top-level methods are related with respect to a domain model in an application area. First, the author clarifies research questions and the assumptions underlying this study, in accordance with a framework of case study research design. The organization of the library and its deployment processes are then briefly introduced. Taking account of the recurring modification of the library, the stability and productivity are investigated with respect to the code-size changes and the number of person-hours spent on maintenance. Finally, it is concluded that the stability of a domain model is ensured by module dependency, and facilitates productive reuse of the domain-oriented library.
Evaluating PSMs in Evolutionary Design: The AUTOGNOSTIC Experiments BIBA 825-847
  Eleni Stroulia; Ashok K. Goel
The specification of generic Problem-Solving Methods (PSMs) has been a fertile research area. A lot of work has been devoted to developing languages for describing PSMs, identifying PSMs, and using their specifications for requirements capture, design and development of knowledge-based systems. In our work, we have been investigating another potential use for PSMs, namely, supporting the redesign of systems that fail to exhibit the behaviours desired of them, that is, behaviours similar to, but slightly different from, the ones they were originally designed to exhibit. To this end, we have defined a PSM modeling language and a failure-driven redesign process based on this language, both of which were implemented in the AUTOGNOSTIC system. In this paper, we report on a sequence of experiments performed with AUTOGNOSTIC. Some of them were exploratory and their goal was to enable the precise characterization of issues relating to the problem of system redesign, while others were designed to evaluate the PSM language and the redesign process implemented in AUTOGNOSTIC.

IJHCS 1999 Volume 51 Issue 5

The Relationship between User Query Accuracy and Lines of Code BIBA 851-864
  Hock C. Chan
In experimental studies on query languages, subjects are required to write queries using different query languages. User query performance is usually measured by query accuracy. There is no clearly defined objective method of applying findings to other queries. This study examines the suitability of using a software metric based on lines of code to estimate user query accuracy. Lines of code have been measured in various ways, such as physical source code lines, logical source code lines or compiled bytes. A method of counting lines of code for database queries is proposed and applied to two query languages. The new method counts Boolean conditions as well as other statements. The relationship between lines of code and user query accuracy was examined with regression models. The results show that lines of code can explain a high percentage of the variance in accuracy, with R² > 0.8 for the standard relational model query language SQL, and R² > 0.9 for the entity relationship model query language KQL. The common assumption that more lines of code will lead to lower accuracy is only partly validated. The findings show a nonlinear relationship, with a possible recovery in accuracy for queries with many lines of code. The results indicate that lines of code can be usefully applied in the study of query languages.
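A counting rule of this general shape can be made concrete. The sketch below counts each major SQL clause as a statement and each additional Boolean condition separately; it is a plausible approximation for illustration, not the paper's exact metric:

    import re

    CLAUSES = ("SELECT", "FROM", "WHERE", "GROUP BY", "HAVING", "ORDER BY")

    def query_loc(sql):
        """Logical lines of code for a single SQL query string."""
        text = " ".join(sql.upper().split())   # normalize whitespace
        clause_count = sum(
            1 for c in CLAUSES if re.search(r"\b" + c + r"\b", text))
        # Each AND/OR adds one Boolean condition beyond the first.
        extra_conditions = len(re.findall(r"\b(AND|OR)\b", text))
        return clause_count + extra_conditions

    print(query_loc("SELECT name FROM staff "
                    "WHERE dept = 'HR' AND age > 30"))  # 3 clauses + 1 = 4

Given such counts and per-query accuracy scores, fitting the regression reported above is a routine step in any statistics package.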
Case-Based Design Browser to Support Software Reuse: Theoretical Structure and Empirical Evaluation BIBA 865-893
  Jennifer J. Ockerman; Christine M. Mitchell
With the proliferation of large, complex software systems, reuse of previous software designs and software artifacts, such as operation concepts, requirements, specifications and source code, is an important issue for both industry and government. Reuse has long been expected to result in substantial productivity and quality gains. To date, this expectation has been largely unmet. One reason may be the lack of tools to support software reuse.
   This research proposes the development of one such tool, the Design Browser. The Design Browser is a software architecture intended to support designers of large software systems in the early stages of software design, specifically conceptual design. The Design Browser is based on principles derived from cognitive engineering (e.g. Woods & Roth, 1988a); naturalistic decision-making, particularly Klein's (1989) recognition-primed decision-making model; and Kolodner's (1993) approach to case-based reasoning.
   As a proof-of-concept demonstration, the Design Browser was implemented for a NASA satellite control sub-system, the command management system (CMS). An empirical evaluation was conducted. It used the CMS Design Browser and participants who were part of the three user groups often involved in large-scale commercial software development. These groups are the software design team, the users and management. The results of the evaluation show that all three groups found the CMS Design Browser quite useful, as demonstrated by actual performance and subjective ratings.
Three Important Determinants of User Performance for Database Retrieval BIBA 895-918
  Hock C. Chan; Bernard C. Y. Tan; Kwok-Kee Wei
Three important factors that determine user performance during database retrieval are representation realism, expressive ease, and task complexity. Representation realism is the level of abstraction used when formulating queries. Expressive ease is the syntactic flexibility permitted when formulating queries. Task complexity is the level of difficulty of queries. A controlled laboratory experiment was conducted to assess the effects of these three factors on user productivity during database retrieval. The independent variables were representation realism (high versus low), expressive ease (high versus low), and query complexity (simple versus complex). The dependent variables were query accuracy and query time. Results show that all these three factors significantly affected user performance during database retrieval. However, their relative impact on query accuracy and query time differed. Moreover, these factors interacted in unique ways to moderate query accuracy and query time. Besides verifying prior empirical findings, these results offer several suggestions for future research and development work in the area of database retrieval.
A Framework for Navigation BIBA 919-945
  Robert Spence
A new schematic framework for navigation is presented which is relevant to physical, abstract and social environments. Navigation is defined as the creation and interpretation of an internal (mental) model, and its component activities are browsing, modelling, interpretation and the formulation of browsing strategy. The design of externalizations and interactions to support these activities, and navigation as a whole, is discussed.
Why Machines Should Analyse Intention in Natural Language Dialogue BIBA 947-989
  Paul Mckevitt; Derek Partridge; Yorick Wilks
One of the most difficult problems in Artificial Intelligence (AI) is to construct a natural language processing system which can interact with users through a natural language dialogue. The problem is difficult because there are so many ways in which a user can phrase his/her utterances to such a system. An added problem is that different types of users have different types of intentions and will conduct different exchanges with the system. While many have proposed theories and models of the processing of intentions in dialogue, few of these have been incorporated within working systems and tested empirically. Here, an experiment is conducted to test what we call the Intention-Computer Hypothesis: that the analysis of intention in natural-language dialogue facilitates effective natural-language dialogue between different types of people and a computer. The experiment provides evidence to support the hypothesis. In turn, the hypothesis provides evidence for a theory of intention analysis for natural-language dialogue processing. A central principle of the theory is that the coherence of natural-language dialogue can be modelled by analysing sequences of intention. A computational model, called Operating System CONsultant (OSCON), implemented in Quintus Prolog, makes use of the theory and hypothesis to understand, and answer in English, English questions about computer operating systems.
Does Automation Bias Decision-Making? BIBA 991-1006
  Linda J. Skitka; Kathleen L. Mosier; Mark Burdick
Computerized system monitors and decision aids are increasingly common additions to critical decision-making contexts such as intensive care units, nuclear power plants and aircraft cockpits. These aids are introduced with the ubiquitous goal of "reducing human error". The present study compared error rates in a simulated flight task with and without a computer that monitored system states and made decision recommendations. Participants in non-automated settings out-performed their counterparts with a highly, but not perfectly, reliable automated aid on a monitoring task. Participants with an aid made errors of omission (missed events when not explicitly prompted about them by the aid) and commission (did what an automated aid recommended, even when it contradicted their training and other 100% valid and available indicators). Possible causes and consequences of automation bias are discussed.
A System Architecture for Knowledge-Based Hypermedia BIBA 1007-1036
  Anneli Edman; Andreas Hamfelt
Hypermedia systems and knowledge systems can be viewed as flip-sides of the same coin. The former are designed to convey information and the latter to solve problems; developments beyond the basic techniques of each system type require techniques from the other type. In this paper, we introduce the concept of knowledge-based or intelligent hypermedia and analyse various constellations of merged hypermedia and knowledge systems. A hypermedia system deals with informal and formalized theories and the relations between and within these. Therefore, the cornerstones of our analysis are the very basic notions involved in formalizing domain knowledge: an informal domain theory, a formal object theory axiomatizing the informal theory and a metatheory analysing the properties and interrelations between and within these. We integrate these notions into a system architecture which is to serve as a programmable system schema for supporting the composition of actual intelligent hypermedia systems. Programming in the large is supported by the schema, which defines the overall system structure, whereas programming in the small is supported by knowledge modelling techniques. The application of the system architecture is illustrated by the construction of an interactive diagnosis system which involves knowledge-based reasoning both for navigation in hyperspace and problem-solving within the domain.

IJHCS 1999 Volume 51 Issue 6

Editorial: Model-Based Legal Knowledge Engineering BIB 1037-1042
  Nienke Den Haan; Giovanni Sartor
The Law as a Dynamic Interconnected System of States of Affairs: A Legal Top Ontology BIBA 1043-1077
  Jaap Hage; Bart Verheij
In this paper, an abstract model of the law is presented that has three primitives: states of affairs, events and rules. The starting point of the abstract model is that the law is a dynamic system of states of affairs which are connected by means of rules and events. The abstract model can be regarded as a top ontology of the law that can be applied to legal knowledge representation. After an elaboration of the three primitives, the uses of the abstract model are illustrated by the analysis of central topics of law. Then we discuss heuristic guidelines for legal knowledge representation that are suggested by the abstract model. The paper concludes with a comparison with related work. The appendix contains a formalism for the abstract model.
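Read concretely, the model says that events, mediated by rules, change which states of affairs obtain. A minimal sketch of that reading follows; the class names and the sample rule are illustrative assumptions, not the paper's formalism:

    class LegalSystem:
        """States of affairs connected by rules and changed by events."""

        def __init__(self, states):
            self.states = set(states)  # states of affairs that currently obtain
            self.rules = []            # (triggering event, added, removed)

        def add_rule(self, event, adds, removes):
            self.rules.append((event, set(adds), set(removes)))

        def happen(self, event):
            """Apply every rule connecting this event to states of affairs."""
            for trigger, adds, removes in self.rules:
                if trigger == event:
                    self.states = (self.states - removes) | adds

    law = LegalSystem(states={"alice_owns_book"})
    law.add_rule("alice_sells_book_to_bob",
                 adds={"bob_owns_book"}, removes={"alice_owns_book"})
    law.happen("alice_sells_book_to_bob")
    print(law.states)  # {'bob_owns_book'}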
Legal Modeling and Automated Reasoning with ON-LINE BIBA 1079-1125
  Andre Valente; Joost Breuker; Bob Brouwer
In this paper we present a modeling approach to legal knowledge systems and its computational realization in the ON-LINE architecture. ON-LINE has modules for modeling legal sources, for storing and retrieving legal information and for reasoning with legal knowledge. The approach takes two perspectives: domain and task. In the domain perspective, a core ontology divides legal knowledge into five major categories: normative, world, responsibility, reactive and creative. For the normative knowledge, which is most typical of legal domains, we developed new representation and inference formalisms which are an alternative to deontic logic. For the world knowledge, we argue for using a terminological knowledge representation language. The structure of the ontology is not a taxonomy but a network of dependencies between the categories. These dependencies reflect the global structure of arguments in legal reasoning. In the task perspective, we followed a top-down approach using the CommonKADS modeling library. Design, planning and assessment were identified as typical tasks in the legal domain. For assessment, a model was specified and implemented.
A Principled Approach to Developing Legal Knowledge Systems BIBA 1127-1154
  Robert W. Van Kralingen; Pepijn R. S. Visser; Trevor J. M. Bench-Capon; H. Jaap Van Den Herik
In this article we present a principled, four-phased approach to the development of legal knowledge systems. We set out from the well-studied CommonKADS method for the development of knowledge systems and tailor this method to the legal domain. In particular, we propose a generic legal ontology, and describe the creation of statute-specific ontologies to adapt the method for building legal systems. In the construction of these ontologies, we start from a theoretical analysis of the legal domain. The well-known example of the Imperial College Library Regulations (ICLR) is used to illustrate the method.
Information Extraction from Legal Texts: The Potential of Discourse Analysis BIBA 1155-1171
  Marie-Francine Moens; Caroline Uyttendaele; Jos Dumortier
There is an urgent need to automatically identify information in legal texts. In this paper, we argue that discourse analysis yields valuable knowledge to be incorporated in text processing systems. Knowledge about discourse patterns has already been applied in legal text generation systems, but it is equally important to incorporate this kind of knowledge in legal information extraction systems, where it helps to locate information in texts. We also demonstrate the need for adequate, maintainable, and possibly sharable knowledge representations of discourse patterns. The findings are illustrated by explicating the role discourse analysis played when building the SALOMON system, a system that automatically abstracts Belgian criminal cases.
Modelling Rhetorical Legal "Logic" -- A Double Syllogism BIBA 1173-1188
  John S. Edwards; Robert I. Akroyd
This paper looks at legal reasoning from the point of view of the work of the lawyer, rather than the law itself. In the case of Common Law systems, this means a more flexible view of how tasks are divided between the humans and the computer system, with an emphasis on decision support rather than complete automation. A process-based model of the lawyer's work is proposed in the form of a double syllogism, which displays an aesthetically pleasing symmetry, but also a significant asymmetry in the role played by perceived precedents. This arises from the use of inductive, rather than deductive, reasoning. The potential complications arising from the issue of the perception of precedents are discussed in depth.
   The double-syllogism model is then considered in the light of CommonKADS terminology and models. It is suggested that decision support systems using knowledge-based techniques, as required to support lawyers working under Common Law jurisdiction, raise a stronger form of the interaction problem that is well known in knowledge-based systems. This means that such systems are not well catered for in the existing CommonKADS Organisational, Agent, Task and Communication Models. The double-syllogism model is suggested as a supplement to CommonKADS in the development of such systems, at least until a more generic addition is available.