
International Journal of Human-Computer Studies 42

Editors: B. R. Gaines
Dates: 1995
Volume: 42
Publisher: Academic Press
Standard No: ISSN 0020-7373; TA 167 A1 I5
Papers: 32
Links: Table of Contents
  1. IJHCS 1995 Volume 42 Issue 1
  2. IJHCS 1995 Volume 42 Issue 2
  3. IJHCS 1995 Volume 42 Issue 3
  4. IJHCS 1995 Volume 42 Issue 4
  5. IJHCS 1995 Volume 42 Issue 5
  6. IJHCS 1995 Volume 42 Issue 6

IJHCS 1995 Volume 42 Issue 1

Editorial BIB 1-2
  Brian Gaines
Measuring the Value of Knowledge BIBA 3-30
  Yoram Reich
The quality of the knowledge that a system has substantially influences its performance. Often, the terms "knowledge", its "quality", and how it is "measured" or "valuated" are left vague enough to accommodate several ad hoc interpretations. This paper articulates two definitions of knowledge and their associated value measures. The paper focuses on the theory underlying measurement and its application to knowledge valuation; it stresses the issue of constructing meaningful measures rather than discussing some of the desirable properties of measures (e.g. reliability or validity). A detailed example of knowledge valuation using the measures is described. The example demonstrates the importance of knowledge valuation for understanding a system, as well as the difficulty of carrying it out. It shows the importance of employing several different measures simultaneously in a single valuation. The paper concludes by discussing the scope of, and relationships between, the measures.
The PROCOPE Semantic Network: An Alternative to Action Grammars BIBA 31-69
  Sebastien Poitrenaud
Formalisms for the description of procedural knowledge, such as action grammars and production systems, do not allow the semantics of the objects involved in actions to be handled directly. Because these models focus on rules, objects appear only through rule-triggering conditions, which precludes making the overall semantic structure of the task world explicit.
   After a critical review of action grammars and their semantic extensions, the PROCOPE formalism is presented as an alternative, object-centred way to describe know-how. Goals, and the procedures for reaching them, are treated as properties of objects in the same way as structural properties; that is, they are the functional properties of objects. Handled in this way, goals and procedures categorize objects and are used to generate the class-inclusion semantic network that is the core of a PROCOPE description.
   A major advantage of PROCOPE over rule-based systems is its ability to express the part of cognitive complexity that is due not to the number of procedures, but to the complexity of the overall structure generated by the way the objects involved in the actions share those procedures.
   Finally, we present the PROCOPE software and show how it has been put to practical use.
Tragic Loss or Good Riddance? The Impending Demise of Traditional Scholarly Journals BIBA 71-122
  Andrew M. Odlyzko
Scholarly publishing is on the verge of a drastic change from print journals to electronic ones. Although this change has been predicted for a long time, trends in technology and growth in the literature are making the transition inevitable. It is likely to occur within a few years, and it is likely to be sudden. This article surveys the pressures that are leading to the impending change and makes predictions about the future of journals, publishers, and libraries. The new electronic publishing methods are likely to improve scholarly communication greatly, partly through more rapid publication, but also through wider dissemination and a variety of novel features that cannot be implemented in the present print system, such as references in a paper to later papers that cite it.
Bulletin BIB 123-135
 

IJHCS 1995 Volume 42 Issue 2

SHAPE: A Machine Learning System from Examples BIBA 137-155
  Francisco Botana; Antonio Bahamonde
This paper presents a new machine learning system called SHAPE. The input data are vectors of properties (represented as attribute-value pairs) that describe individual cases, examples or observations in a given world. Each case belongs to exactly one of a set of classes, and the aim is to produce a collection of decision rules that conclude the class from the observed properties.
   SHAPE follows three steps. First, it builds an acyclic graph capturing dependencies among the properties involved. Since this net is endowed with a semantic interpretation, it can be read as a first draft of the classification rules. Second, these rules are rewritten to compact their syntactic description using automata-minimization techniques. Finally, the rules are generalized in order to obtain the definitive intensional description of the learned concepts.
   Note that the last two stages could be applied to a set of rules attached to a collection of examples coming from any other learning system.
   To close the paper, we present experiments with SHAPE that illustrate the performance of the system in a wide range of applications.
Fuzzy Cognitive Maps Considering Time Relationships BIBA 157-168
  Kyung Sam Park; Soung Hie Kim
Causal knowledge is often cyclic and fuzzy, and thus hard to represent in the form of trees. A fuzzy cognitive map (FCM) can represent causal knowledge as a signed directed graph with feedback. It provides an intuitive framework in which to frame decision problems as perceived by decision makers and to incorporate the knowledge of experts. This paper proposes the fuzzy time cognitive map (FTCM), an FCM augmented with time relationships on its arrows. We first discuss the characteristics and basic assumptions of the FCM, and describe causal propagation in an FCM whose causalities take values in the negative-positive-neutral interval [-1, 1]. We then develop a value-preserving method for translating an FTCM whose arrows carry different time lags into one in which every arrow has the same unit time lag. With the FTCM, we illustrate analysing how the causalities among factors change with the lapse of time.
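   For readers unfamiliar with the formalism, the core FCM iteration is easy to sketch. The following Python fragment (a minimal sketch with invented weights; it shows the plain FCM update, not the authors' FTCM lag translation) propagates activations along the signed arrows:

      import numpy as np

      def fcm_step(state, W, squash=np.tanh):
          # One synchronous FCM update: each concept's new activation is
          # a squashed, weighted sum of its causes; W[i, j] in [-1, 1] is
          # the causal weight of concept i on concept j.
          return squash(state @ W)

      # Hypothetical 3-concept map: 0 -> 1 (+0.7), 1 -> 2 (+0.5), 2 -> 0 (-0.4).
      W = np.array([[ 0.0, 0.7, 0.0],
                    [ 0.0, 0.0, 0.5],
                    [-0.4, 0.0, 0.0]])
      x = np.array([1.0, 0.0, 0.0])
      for t in range(5):
          x = fcm_step(x, W)
          print(t + 1, np.round(x, 3))

   One natural way to realize a value-preserving unit-lag translation of the kind the authors describe is to replace a lag-k arrow with a chain of k unit-lag arrows through dummy concepts, so that one update step always corresponds to one unit of time.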
Comparing Telephone-Computer Interface Designs: Are Software Simulations as Good as Hardware Prototypes? BIBA 169-184
  N. P. Archer; Y. Yuan
Widespread interest in the evaluation of human-system interfaces has led to the development of various techniques in usability engineering. Usability evaluations are usually carried out on interface prototypes. However, if the design involves hardware implementation, such as a special keypad or a control panel layout, producing hardware prototypes for evaluation can be expensive and time-consuming. One solution to this problem is to use software tools for design simulation. In this case, a question which must be answered is: will a simulated prototype produce the same conclusions as a hardware prototype? That is, is software simulation a valid approach? The main purpose of this paper is to address this issue through an experiment. A multimedia authoring package was used to simulate several potential telephone handset designs for telephone-computer interfaces. The simulated prototypes were tested and compared with a physical keyboard for validation. The experiment did confirm the validity of simulation in this particular setting. It also demonstrated the advantages of using a software tool to build the prototype and to automate the evaluation process, including user training, test setting, and data collection.
Error-Information in Tutorial Documentation: Supporting Users' Errors to Facilitate Initial Skill Learning BIBA 185-206
  Ard W. Lazonder; Hans van der Meij
Novice users make many errors when they first try to learn how to work with a computer program like a spreadsheet or word processor. No matter how user-friendly the software or the training manual, errors can and will occur. The current view on errors is that they can be helpful or disruptive, depending on the extent to which they are controlled in the learning process. This study examines one way in which such error control can be brought about, by investigating the design and role of error-information in a (tutorial) manual. The error-information was designed to support novice users' detection, diagnosis and correction of errors, and was based on a general model of error-handling. In an experiment, a manual that contained ample error-information was compared with a manual containing hardly any error-information. The outcomes showed that the presence of error-information in the manual helped subjects perform better both during and after practice. Among other things, these subjects completed training faster and showed superior corrective knowledge and skill after practice, while acquiring the same level of constructive skill. The discussion addresses the compensating roles of support for error-handling on screen and on paper.
SOL: A Shared Object Toolkit for Cooperative Interfaces BIBA 207-234
  Gareth Smith; Tom Rodden
The paper presents a user interface toolkit to support the construction of cooperative multi-user interfaces. The toolkit is based on the configuration of shared interface objects to construct cooperative interfaces. A principal focus of the toolkit is the provision of accessible facilities to manage interface configuration and tailoring. Most existing facilities to manage multi-user interfaces tend to be application specific and provide only limited tailorability for purpose built cooperative applications.
   In addition, the current structure of most cooperative applications fails to separate application semantics from cooperation-specific semantics. In this paper we present a multi-user interface toolkit that provides management facilities in a manner that separates the features specific to cooperative use from application semantics. This is achieved by deriving multi-user interfaces from a common shared interface constructed from shared interface objects. We suggest that this separation of semantics represents an initial identification of re-usable cooperative interface components.
Bulletin BIB 235-244
 

IJHCS 1995 Volume 42 Issue 3

A Guessing Measure of Program Comprehension BIBA 245-263
  J. Steve Davis
An effective comprehension measure could be helpful in ranking programs by complexity. Measures involving filling in missing parts of a program can be adapted from the prose domain; cloze tests, for example, have been applied to software. We evaluated a new measure based on a fill-in-the-blank exercise conducted by an automated tool. The subject is asked to guess certain characters that are missing from a sample program displayed on a computer screen. The value of the measure is derived, on an information-theoretic basis, from the number of incorrect guesses. This measure has shown promise in experimental evaluations of its ability to measure program comprehension.
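   The abstract leaves the exact formula unstated, but the information-theoretic idea is easy to sketch: a blank that takes many guesses to fill carries more surprise. The Python scoring rule below is a hypothetical Shannon-style stand-in, not Davis's published measure:

      import math

      def guess_bits(wrong_guesses, alphabet_size=26):
          # Hypothetical score for one blank: a character found after k
          # wrong guesses carries roughly log2(k + 1) bits of surprise,
          # capped at the alphabet's maximum entropy.
          return math.log2(min(wrong_guesses + 1, alphabet_size))

      # Wrong guesses recorded before each blank was filled correctly;
      # a higher total suggests a harder-to-comprehend program.
      blanks = [0, 2, 5, 1]
      print(round(sum(guess_bits(k) for k in blanks), 2), "bits")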
Agent Systems that Negotiate and Learn BIBA 265-288
  Siegfried R. Bocionek
Agents for office automation can be seen as an extension of today's office software (word processors, spreadsheets). They should not only support single tasks, but assist their users throughout complex workflow procedures involving many people, if possible in much the same way as human secretaries do. For example, such programs might assist a worker in scheduling meetings, flagging important electronic mail, processing purchase orders, etc. This concept is clearly attractive: it could make secretarial assistance available to everyone within an organization and make such assistance mobile (on notebooks and PDAs).
   This paper focuses on two major features that determine the success of secretarial software. First, the assistance program must be able to negotiate (with other agents as well as with humans), because most office tasks involve interaction among several people. Second, it must be able to learn. It has to learn to adapt to its users' idiosyncrasies (and not vice versa), since people tend to develop individual work techniques and styles. It also has to learn to adapt to specific workflows, which can differ substantially from organization to organization. To ensure this adaptability we propose -- similar to the way human secretaries are trained -- a learning-by-being-told approach.
   Mechanisms for negotiation and learning have been included in secretarial agents for calendar and room management. In particular, we describe the architecture, functionality, and capabilities of the calendar apprentice CAP II and the room reservation apprentice RAP.
Speech versus Keying in Command and Control Applications BIBA 289-305
  R. I. Damper; S. D. Wood
Experimental comparisons of speech and competitor input media such as keying have, taken overall, produced equivocal results: this has usually been attributed to "task-specific variables". Thus, it seems that there are some good, and some less good, situations for utilization of speech input. One application generally thought to be a success is small-vocabulary, isolated-word recognition for command and control. In a simulated command and control task, Poock purportedly showed a very significant superiority of speech over keying in terms of higher input speeds and lower error rates. This paper argues that the apparent superiority observed results from a methodological error -- specifically that the verbose commands chosen suit the requirements of speech input but make little or no concession to the requirements of keying. We describe experiments modelled on those of Poock, but designed to overcome this putative flaw and to effect a fair comparison of the input media by using terse, abbreviated commands for the keying condition at least. Results of these new experiments reveal that speech input is 10.6% slower (although this difference is not statistically significant) and 360.4% more error-prone than keying, supporting our hypothesis that the methodology of the earlier work was flawed. However, simple extrapolation of our data for terse commands to the situation where keyed commands are entered in full suggests that other differences between our work and Poock's could play a part. Overall, we conclude that a fair comparison of input media requires an experimental design that explicitly attempts to minimize the so-called transaction cycle -- the number of user actions necessary to elicit a system response -- for each medium.
Reflection and Goal Management in Exploratory Learning BIBA 307-339
  Carol-Ina Trudel; Stephen J. Payne
We report two experiments that examine the nature of exploratory learning of interactive devices. We argue that the success of exploratory learning depends on the degree to which learners reflect on their interactions, and on how well they manage their goals. In Experiment 1, a keystroke limit was imposed on subjects, i.e. a limit on the amount of physical interaction with the device. In addition, some subjects were provided with a list of goals to help them manage their exploration. As predicted, the first intervention resulted in more successful learning compared with the performance of subjects who explored without any constraints. Experimenter-provided goals also yielded some benefits, but had a much weaker effect than the keystroke limit.
   Experiment 2 confirmed and extended the main findings of the first experiment. Further, having noted that subjects preferred to switch opportunistically from goal to goal and from mode to mode, we predicted that limiting them to exploring one part of the device at a time would result in better learning. We found that imposing a keystroke limit or forcing subjects to explore one mode at a time led to large and significant improvements in exploratory learning. The goal-list manipulation also apparently improved exploration, but to a lesser degree than the other two manipulations.
Bulletin BIB 341-351
 

IJHCS 1995 Volume 42 Issue 4

User Errors in Database Query Composition BIBA 353-381
  John B. Smelcer
This research reports on the experimental test of several causes of user errors while composing database queries. The query language under consideration is Structured Query Language (SQL), the industry standard language for querying databases. Unfortunately, users commit many errors when using SQL. To understand user errors, a model of query writing was developed that integrated a GOMS-type analysis of query writing with the characteristics of human cognition. This model revealed multiple cognitive causes of a frequent and troublesome error, join clause omission. This semantic user error returns answers from the database that may be undetectably wrong, affecting users, decision makers, and programmers.
   The model predicted four possible causes of join clause omission, and empirical testing revealed that all four contributed to the error. Specifically, the frequency of this error increased because (1) the load on working memory caused by writing intervening clauses made the users forget to include the join clause, (2) an explicit clue to write the join clause was absent from the problem statement, (3) users inappropriately reused the procedure appropriate for a single table query, which requires no join clause, when a join clause is indeed necessary, and (4) some users never learned the correct procedure. These results are significant for understanding user errors in general and for developing new interfaces and training schemes for the task of writing database queries.
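   The error itself is easy to reproduce. The sketch below (hypothetical emp/dept tables, runnable with Python's built-in sqlite3) shows how omitting the join clause silently returns a Cartesian product instead of raising an error:

      import sqlite3

      con = sqlite3.connect(":memory:")
      con.executescript("""
          CREATE TABLE emp  (name TEXT, dept_id INT);
          CREATE TABLE dept (dept_id INT, dept TEXT);
          INSERT INTO emp  VALUES ('Ada', 1), ('Bo', 2);
          INSERT INTO dept VALUES (1, 'R&D'), (2, 'Sales');
      """)

      # Join clause omitted: no error is raised, but the result is the
      # Cartesian product (4 rows) -- an "undetectably wrong" answer.
      print(con.execute("SELECT name, dept FROM emp, dept").fetchall())

      # With the join clause: the 2 intended rows.
      print(con.execute("SELECT name, dept FROM emp, dept "
                        "WHERE emp.dept_id = dept.dept_id").fetchall())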
There Was a Long Pause: Influencing Turn-Taking Behaviour in Human-Human and Human-Computer Spoken Dialogues BIBA 383-411
  Anne Johnstone; Umesh Berry; Tina Nguyen; Alan Asper
We report an experiment designed to compare human-human spoken dialogues with human-computer spoken dialogues. Our primary purpose was to collect data on protocols that were used to control the interaction. Three groups of subjects (35 total) were each asked to complete tasks over the phone. The experimental procedure was a new variation on the Wizard of Oz simulation technique that allowed much clearer comparisons to be made between human-human and human-computer interactions.
   Previous studies have shown that there are significant differences between human-human and human-computer interactions. While some effects can be attributed to the beliefs about computers the subjects bring to the task, others appear to be connected with the ongoing interaction styles of the speakers. Our study focuses on effects created by differences in interaction style. An important feature of the study is the use of two wizards, a technique which resulted in a realistically degraded communication channel.
   A second important feature of this study is the emphasis on computational models of spoken dialogue processing. One of the aims of Wizard of Oz studies is to identify language restrictions that will make the understanding task easier yet still be acceptable to the users. We observed that subjects could indeed successfully carry out their task with a restricted turn-taking protocol. More importantly, however, the experiment pointed us in the direction of a less restricted protocol and provided the data for a more sophisticated computational model of turn-taking.
   An important aspect of our study is the light it appears to shed on conflicting results in the literature. We discuss how these conflicts can be explained in terms of differences in interlocution style. We argue that ongoing interlocution style has a significant effect on the dialogue and overrides a priori models of interlocutor ability.
CODE4: A Unified System for Managing Conceptual Knowledge BIBA 413-451
  Doug Skuce; Timothy C. Lethbridge
CODE4 is a general-purpose knowledge management system, intended to assist with the common knowledge-processing needs of anyone who wants to analyse, store, or retrieve conceptual knowledge, in applications as varied as the specification, design and user documentation of computer systems, the construction of term banks, and the development of ontologies for natural language understanding.
   This paper provides an overview of CODE4. We first describe the general philosophy and rationale of CODE4 and relate it to other systems. Next, we discuss the knowledge representation, specifically designed to meet the needs of flexible, interactive knowledge management. The highly developed user interface, which we believe to be critical for this type of system, is explained in some detail. Finally, we describe how CODE4 is being used in a number of applications.
Bulletin BIB 453-463
 

IJHCS 1995 Volume 42 Issue 5

Acquisition and Exploitation of Gradual Knowledge BIBA 465-499
  Rose Dieng; Olivier Corby; Stephane Lapalut
Topoi are gradual inference rules, often used by experts in several types of problem, and they can be exploited at various phases of a knowledge-based system's life cycle. In this article, after motivating the interest of topoi, we try to answer the following question: how can the knowledge engineer be helped to elicit gradual knowledge from the expert and to exploit it? To this end, we distinguish two viewpoints for studying the notion of topoi, corresponding to the two levels distinguished by Newell: the knowledge level and the symbol level. We formalize topoi through several qualitative-physics formalisms. We exploit knowledge elicitation techniques such as rating grids and knowledge acquisition methods such as KADS and KOD to facilitate the acquisition of topoi. At the symbol level, we propose different ways of representing and implementing topoi. Finally, we study the issues of topoi-based validation and describe a possible topoi-based tool. Throughout the article, we illustrate our ideas with examples of topoi in traffic accident analysis.
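   As a hint of what a topoi-based tool might compute, the sketch below chains signed gradual rules in the paper's traffic-accident domain. The rules, signs and strengths are invented for the illustration, and the formalisms studied in the article are richer than this bare sign-and-strength propagation:

      # Each topos is a gradual rule "the more A, the more/less B",
      # reduced here to a sign and a strength in (0, 1].
      topoi = {
          "speed": [("braking_distance", +1, 0.9)],
          "braking_distance": [("accident_risk", +1, 0.6)],
          "visibility": [("accident_risk", -1, 0.7)],
      }

      def propagate(var, trend, strength=1.0, out=None):
          # Depth-first propagation of an observed trend (+1 increases,
          # -1 decreases); strengths multiply along a chain, and weaker
          # conclusions never overwrite stronger ones.
          if out is None:
              out = {}
          for target, sign, s in topoi.get(var, []):
              t, w = trend * sign, strength * s
              if abs(w) > abs(out.get(target, 0.0)):
                  out[target] = t * w
                  propagate(target, t, w, out)
          return out

      print(propagate("speed", +1))  # braking_distance +0.9, accident_risk +0.54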
Rethinking Video as a Technology for Interpersonal Communications: Theory and Design Implications BIBA 501-529
  Steve Whittaker
This paper re-assesses the role of real-time video as a technology to support interpersonal communications at a distance. We review three distinct hypotheses about the role of video in the coordination of conversational content and process. For each hypothesis, we identify design implications and outstanding research questions derived from current findings. We first evaluate the non-verbal communication hypothesis, namely the prevailing assumption that the role of video is to supplement speech, as embodied in applications such as videoconferencing and the videophone. We conclude that previous work has overestimated the importance of video at the expense of audio. This finding has strong implications for the implementation of such systems, and we make recommendations about both synchronization and bandwidth allocation. Furthermore, our own recent studies of workplace interactions point to other communicative functions of video. Current systems have neglected a potentially vital role of visual information: supporting the process of achieving opportunistic connection. Rather than supplementing audio information, video is used here to assess the communication availability of others. Visual information thereby promotes the kinds of remote opportunistic communications that are prevalent in face-to-face settings. We discuss early experiments with such connection applications and identify outstanding design and implementation issues. Finally, we discuss another novel application of video, "video-as-data". Here the video image is used to transmit information about the work objects themselves, rather than about the interactants, creating a dynamic shared workspace and simulating a shared physical environment. In conclusion, we suggest that research move away from an exclusive focus on non-verbal communication and begin to investigate these other uses of real-time video.
Cognitive Support: Designing Aiding to Supplement Human Knowledge BIBA 531-571
  H. P. de Greef; M. A. Neerincx
This article advocates, as an alternative to both the "classical" technology-centred and user-centred approaches, focusing on joint human-computer task performance in system design. Human involvement can be improved by designing system functions that complement human knowledge and capacities. Based on general needs for cognitive support, an aiding function is proposed which, in the course of task execution, takes the initiative to present context-specific, procedural task knowledge. Designing such aiding comprises two aspects: the design of software and the design of a human-computer system. Modern model-based software engineering methods provide strong support for the design of software systems, but little support for modelling human-computer interaction. Current model-based methods are therefore extended to address human-computer interaction issues. The resulting method comprises the design of easy-to-use-and-learn interfaces that provide aiding when needed. In a case study, the method is applied to design both a conventional plain interface and an aiding interface for the statistical program HOMALS. In an experiment, users with little HOMALS expertise performed their tasks better and learned more with the aiding interface.

IJHCS 1995 Volume 42 Issue 6

Special Issue: Real-World Applications of Uncertain Reasoning

Editorial: Real-World Applications of Uncertain Reasoning BIB 573-574
  David Heckerman; Abe Mamdani; Michael P. Wellman
Student Assessment Using Bayesian Nets BIBA 575-591
  Joel Martin; Kurt VanLehn
We describe OLAE, an assessment tool that collects data from students solving problems in introductory college physics, analyses those data with probabilistic methods to determine what knowledge the student is using, and flexibly presents the results of the analysis. For each problem, OLAE automatically creates a Bayesian net that relates knowledge, represented as first-order rules, to particular actions, such as written equations. Using the resulting Bayesian network, OLAE observes a student's behavior and computes the probability that the student knows and uses each of the rules.
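   A minimal sketch of the kind of update such a network performs, reduced to a single rule with a two-state knows/does-not-know variable (the conditional probabilities are invented; OLAE's nets relate many rules to many actions):

      def p_knows_after(prior, p_right_if_known=0.9, p_guess=0.2, correct=True):
          # Bayes' rule for one rule and one observed action.
          like_known = p_right_if_known if correct else 1 - p_right_if_known
          like_unknown = p_guess if correct else 1 - p_guess
          num = prior * like_known
          return num / (num + (1 - prior) * like_unknown)

      belief = 0.5                  # prior that the student knows the rule
      for correct in (True, True, False, True):
          belief = p_knows_after(belief, correct=correct)
      print(round(belief, 3))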
A Probabilistic Approach to Determining Biological Structure: Integrating Uncertain Data Sources BIBA 593-616
  Russ B. Altman
Modeling the structure of biological molecules is critical for understanding how these structures perform their function, and for designing compounds to modify or enhance this function (for medicinal or industrial purposes). The determination of molecular structure involves defining three-dimensional positions for each of the constituent atoms using a variety of experimental, theoretical and empirical data sources. Unfortunately, each of these data sources can be noisy or not available in sufficient abundance to determine the precise position of each atom. Instead, some atomic positions are precisely defined by the data, and others are poorly defined. An understanding of structural uncertainty is critical for properly interpreting structural models. We have developed a Bayesian approach for determining the coordinates of atoms in a three-dimensional space. Our algorithm takes as input a set of probabilistic constraints on the coordinates of the atoms, and an a priori distribution for each atom location. The output is a maximum a posteriori (MAP) estimate of the location of each atom. We introduce constraints as updates to the prior distributions. In this paper, we describe the algorithm and show its performance on three data sets. The first data set is synthetic and illustrates the convergence properties of the method. The other data sets comprise real biological data for a protein (the trp repressor molecule) and a nucleic acid (the transfer RNA fold). Finally, we describe how we have begun to extend the algorithm to make it suitable for non-Gaussian constraints.
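   At its core this is sequential Bayesian updating of Gaussian beliefs. The one-dimensional sketch below (invented numbers) shows a vague prior on one coordinate being sharpened by two noisy constraints through a precision-weighted, Kalman-style update; the algorithm described above does this in three dimensions over many coupled atoms:

      def gauss_update(mean, var, obs, obs_var):
          # Fuse a Gaussian prior with one noisy Gaussian constraint:
          # the gain k weighs the observation against the prior.
          k = var / (var + obs_var)
          return mean + k * (obs - mean), (1 - k) * var

      # Hypothetical x-coordinate of one atom: vague prior, two constraints.
      mean, var = 0.0, 100.0
      for obs, obs_var in [(3.2, 1.0), (2.8, 4.0)]:
          mean, var = gauss_update(mean, var, obs, obs_var)
      print(round(mean, 2), "+/-", round(var ** 0.5, 2))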
Time Series Prediction using Belief Network Models BIBA 617-632
  Paul Dagum; Adam Galper
We address the problem of generating normative forecasts efficiently from a Bayesian belief network. Forecasts are predictions of future values of domain variables conditioned on current and past values of domain variables. To address the forecasting problem, we have developed a probability forecasting methodology, Dynamic Network Models (DNMs), through a synthesis of belief network models and classical time-series models. The DNM methodology is based on the integration of fundamental methods of Bayesian time-series analysis, with recent additive generalizations of belief network representation and inference techniques.
   We apply DNMs to the problem of forecasting episodes of apnea, that is, regular intervals of breathing cessation in patients afflicted with sleep apnea. We compare the one-step-ahead forecasts of chest volume, an indicator of apnea, made by autoregressive models, belief networks, and DNMs. We also construct a DNM to analyse the multivariate time series of chest volume, heart rate and oxygen saturation data.
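   The classical half of a DNM is an autoregressive model. A minimal sketch of AR fitting and one-step-ahead forecasting on synthetic data follows (the belief-network half, which conditions the dynamics on other domain variables, is not shown):

      import numpy as np

      def fit_ar(series, p):
          # Ordinary least-squares fit of an AR(p) model: predict each
          # value from the p values preceding it.
          X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
          coef, *_ = np.linalg.lstsq(X, series[p:], rcond=None)
          return coef

      rng = np.random.default_rng(0)
      chest = np.cumsum(rng.normal(size=200)) * 0.1  # synthetic, not real apnea data
      coef = fit_ar(chest, p=3)
      print("one-step-ahead forecast:", round(float(chest[-3:] @ coef), 3))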
Classifying Delinquent Customers for Credit Collections: An Application of Probabilistic Inductive Learning BIBA 633-646
  Ozden Gur-Ali; William A. Wallace
Probabilistic Inductive Learning (PrIL), a methodology that incorporates statistically determined measures of goodness with tree inductive algorithms from machine learning, has been applied to the credit collection operations of a major bank. Rules were induced from past customer account data to predict whether an account that is one month delinquent will be rectified or remain delinquent. Each of the rules has a reliability greater than that prescribed by management. The induced rules out-performed current scoring methods in predicting customer behavior on a holdout dataset. In addition, using the rules will result in matching current performance with 30% fewer contacts with delinquent customers.
Blocking Gibbs Sampling in Very Large Probabilistic Expert Systems BIBA 647-666
  Claus S. Jensen; Uffe Kjaerulff; Augustine Kong
We introduce a methodology for performing approximate computations in very complex probabilistic systems (e.g. huge pedigrees). Our approach, called blocking Gibbs, combines exact local computations with Gibbs sampling in a way that complements the strengths of both. The methodology is illustrated on a real-world problem involving a heavily inbred pedigree containing 20000 individuals. We present results showing that blocking-Gibbs sampling converges much faster than plain Gibbs sampling for very complex problems.
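   The intuition carries over to a toy case: when variables are strongly coupled, single-site updates mix slowly, while sampling a block jointly does not. The sketch below contrasts the two on a bivariate normal (a stand-in illustration, far simpler than pedigree analysis):

      import numpy as np

      rng = np.random.default_rng(1)
      rho = 0.99        # strong coupling between the two variables

      def plain_gibbs(n):
          # Update x1 | x2, then x2 | x1, one variable at a time.
          x1 = x2 = 0.0
          out = np.empty(n)
          for i in range(n):
              x1 = rng.normal(rho * x2, np.sqrt(1 - rho ** 2))
              x2 = rng.normal(rho * x1, np.sqrt(1 - rho ** 2))
              out[i] = x1
          return out

      def blocking_gibbs(n):
          # Sample the block (x1, x2) jointly; exact in this toy case.
          cov = np.array([[1.0, rho], [rho, 1.0]])
          return rng.multivariate_normal([0.0, 0.0], cov, size=n)[:, 0]

      # Lag-1 autocorrelation of the chain: near rho**2 for plain Gibbs
      # (slow mixing), near zero when the block is sampled jointly.
      for draw in (plain_gibbs, blocking_gibbs):
          x = draw(5000)
          print(draw.__name__, round(float(np.corrcoef(x[:-1], x[1:])[0, 1]), 3))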
Bayesian Inference-Based Fusion of Radar Imagery, Military Forces and Tactical Terrain Models in the Image Exploitation System/Balanced Technology Initiative BIBA 667-686
  T. S. Levitt; C. L. Winter; C. J. Turner; R. A. Chestek; G. J. Ettinger; S. M. Sayre
The Imagery Exploitation System/Balanced Technology Initiative (IES/BTI) inputs synthetic aperture radar (SAR) imagery and outputs probabilistically ranked interpretations of the presence and location of military force membership, organization, and expected ground formations. There are also probabilistic models of underlying terrain types from a tactical perspective that provide evidence supporting or denying the presence of forces at a location. The system compares sets of detected military vehicles extracted from imagery against the models of military units and their formations to create evidence of force type and location. Based on this evidence, the system dynamically forms hypotheses of the presence, location and formations of military forces on the ground, which it represents in a dynamically modified Bayesian network.
   The IES/BTI functional design is based on a decision theoretic model in which processing choices are determined as a utility function of the current state of interpretation of imagery and a top-level goal to exploit imagery as accurately and rapidly as possible, given the available data, current state of the interpretation of force hypotheses and the system processing suite.
   In order to obtain sufficient throughput in processing multi-megabyte SAR imagery, and also to take advantage of natural parallelism in 2D-spatial reasoning, the system is hosted on a heterogeneous network of multiple parallel computers including a SIMD Connection Machine 2 and a MIMD Encore Multimax.
   Independent testing by the US Army, using imagery of Iraqi forces taken during Desert Storm, indicated an average 260% improvement in the performance of expert SAR imagery analysts using IES/BTI as a front end to their image exploitation.
Reactive Scheduling: Improving the Robustness of Schedules and Restricting the Effects of Shop Floor Disturbances by Fuzzy Reasoning BIBA 687-704
  Jurgen Dorn; Roger Kerr; Gabi Thalhammer
Practical scheduling usually has to react to many unpredictable events and uncertainties in the production environment. Although often possible in theory, rescheduling from scratch is undesirable in such cases. Since the surrounding organization will be prepared for the predicted schedule, it is important to change only those features of the schedule that are necessary.
   We show how, on the one hand, fuzzy logic can be used to support the construction of schedules that are robust with respect to changes caused by certain types of event. On the other hand, we show how a reaction can be restricted to a small environment by means of fuzzy constraints and a repair-based problem-solving strategy.
   We demonstrate the proposed representation and problem-solving method with a scheduling application in a steelmaking plant. A preliminary schedule is constructed by taking into account only the most likely duration of each operation. This schedule is iteratively "repaired" until some threshold evaluation is reached. Repairs are found with a local search procedure based on Tabu Search. Finally, we show which events can lead to reactive scheduling and how this is supported by the repair strategy.
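   The shape of such a repair loop can be sketched generically. The fragment below repairs a one-machine job order by tabu-guided adjacent swaps so as to reduce total tardiness (toy durations and due dates; the repairs in the paper operate on fuzzy constraint satisfaction degrees in a steelmaking schedule):

      def tardiness(order, dur, due):
          # Total tardiness of jobs run back-to-back in the given order.
          t = total = 0
          for j in order:
              t += dur[j]
              total += max(0, t - due[j])
          return total

      def repair(order, dur, due, iters=100, tabu_len=2):
          # Repair-based search: try adjacent swaps, take the best
          # non-tabu one, keep recent swap positions on a tabu list.
          best = cur = list(order)
          tabu = []
          for _ in range(iters):
              moves = [m for m in range(len(cur) - 1) if m not in tabu]
              def cost(m):
                  trial = cur[:]
                  trial[m], trial[m + 1] = trial[m + 1], trial[m]
                  return tardiness(trial, dur, due)
              m = min(moves, key=cost)
              cur = cur[:]
              cur[m], cur[m + 1] = cur[m + 1], cur[m]
              tabu = (tabu + [m])[-tabu_len:]
              if tardiness(cur, dur, due) < tardiness(best, dur, due):
                  best = cur
          return best

      dur = [3, 2, 4, 1, 5]         # invented durations and due dates
      due = [4, 3, 9, 2, 16]
      best = repair([0, 1, 2, 3, 4], dur, due)
      print(best, tardiness(best, dur, due))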
Bulletin BIB 705-712