| A Voice- and Touch-Driven Natural Language Editor and its Performance | | BIBA | 1-21 | |
| Alan W. Biermann; Linda Fineman; J. Francis Heidlage | |||
| The performance of a voice- and touch-driven natural language editor is described as subjects used it to carry out editing tasks. The system can process imperative sentences with noun phrases that may include pronouns, quantifiers and references to dialogue focus. It uses a commercial speaker-dependent connected-speech recognizer and processes sentences spoken by human subjects at a rate of five to seven sentences per minute. Sentence recognition rates were 98% for our expert speaker and in the mid-70% range for subjects. Subjects had more difficulty learning to use connected speech than had been the case in earlier experiments with discrete speech. | |||
| Design of Interactive Systems -- A Formal Approach | | BIBA | 23-46 | |
| Varghese S. Jacob; James C. Moore; Andrew B. Whinston | |||
| Decision Support Systems (DSSs) are used to support a user's decision process. One generally required characteristic of a DSS is that it be an interactive system. Typically, the degree of interaction between the human and the system is such that one can view the information processing activity as being performed by a combined human-computer information processor. Although DSSs are fairly commonly used, very little work has been done to develop a formal basis for the design of such systems that takes into account the interactive nature of problem solving. In this paper we propose a formal model for analysing the human-machine information processor. The model takes into account the cost of performing information-gathering actions, communication costs and time constraints. We illustrate the application of the model within the domain of categorization. A special case of the categorization problem, called the "only-correct-guesses-count" problem, is defined and analyzed within the context of the model. | |||
| Understanding Scene Descriptions by Integrating Different Sources of Knowledge | | BIBA | 47-81 | |
| Fausto Giunchiglia; Carlo Ferrari; Paolo Traverso; Emanuele Trucco | |||
| The aim of this work is to describe a system, called NALIG (Natural-Language-driven Image Generator), able to understand natural language descriptions of object spatial configurations and to draw on a graphic screen one of the infinitely many scenes consistent with the input. NALIG can be used interactively, and at any step a multi-layered contextual analysis of the input is performed to detect and eliminate possible inconsistencies due to previous default choices. The system is described in terms of its functionality, its internal structure and the interactions among its sub-parts. | |||
| Conceptual Data Modelling in Database Design: Similarities and Differences between Expert and Novice Designers | | BIBA | 83-101 | |
| Dinesh Batra; Joseph G. Davis | |||
| This paper explores the similarities and differences between experts and novices engaged in a conceptual data modelling task, a critical part of overall database design, using data gathered in the form of think-aloud protocols. It develops a three-level process model of the subjects' behavior and the differentiated application of this model by experts and novices. The study found that the experts focused on generating a holistic understanding of the problem before developing the conceptual model. They were able to categorize problem descriptions into standard abstractions. The novices tended to have more errors in their solutions largely due to their inability to integrate the various parts of the problem description and map them into appropriate knowledge structures. The study also found that the expert and novice behavior was similar in terms of modelling facets like entity, identifier, descriptor and binary relationship, somewhat different in modelling ternary relationship, but quite different in the modelling of unary relationship and category. These findings are discussed in relation to the results of previous expert-novice studies in other domains. | |||
| Applications and Extensions of OWA Aggregations | | BIBA | 103-122 | |
| Ronald R. Yager | |||
| We discuss the idea of ordered weighted averaging (OWA) operators. These operators provide a family of aggregation operators lying between the "and" and the "or". We introduce two possible semantics associated with the OWA operator, the first being a kind of generalized logical connective and the second being a new type of probabilistic expected value. We suggest some applications of these operators, among them multicriteria decision making under uncertainty and search procedures in games. We also provide a formulation of OWA operators that can be used in environments in which the underlying scale is simply an ordinal one. | |||
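As a concrete illustration of the operator family described above, here is a minimal Python sketch of an OWA aggregation (the weight vectors and scores are invented examples, not taken from the paper):

```python
def owa(weights, values):
    """Ordered weighted averaging: sort the arguments in descending
    order, then form the weighted sum with the given weight vector.
    Weights are assumed non-negative and summing to 1."""
    assert len(weights) == len(values)
    assert abs(sum(weights) - 1.0) < 1e-9
    ordered = sorted(values, reverse=True)  # b_1 >= b_2 >= ... >= b_n
    return sum(w * b for w, b in zip(weights, ordered))

scores = [0.4, 0.9, 0.6]
print(owa([1.0, 0.0, 0.0], scores))  # pure "or" (max)  -> 0.9
print(owa([0.0, 0.0, 1.0], scores))  # pure "and" (min) -> 0.4
print(owa([1/3, 1/3, 1/3], scores))  # arithmetic mean  -> 0.633...
```

Choosing the weight vector moves the operator anywhere between the "and" (min) and the "or" (max), which is the sense in which OWA operators form a family lying between the two.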
| Characterization of Comparative Belief Structures | | BIBA | 123-133 | |
| S. K. M. Wong; Y. Y. Yao; P. Bollmann | |||
| Comparative belief is a generalization of comparative probability. An axiomatic characterization of beliefs plays an important role in the development of a complete theory of beliefs. This paper provides an axiomatic system for belief relations. It is shown that within this system there are belief functions which almost agree with comparative beliefs. A better understanding of comparative beliefs will also alleviate some of the difficulties in the acquisition and interpretation of belief numbers. | |||
| Communicative Acts for Explanation Generation | | BIBA | 135-172 | |
| Mark T. Maybury | |||
| Knowledge-based systems that interact with humans often need to define their terminology, elucidate their behavior or support their recommendations or conclusions. In general, they need to explain themselves. Unfortunately, current computer systems, if they can explain themselves at all, often generate explanations that are unnatural, ill-connected or simply incoherent. They typically have only one method of explanation, which does not allow them to recover from failed communication. At a minimum, this can irritate an end-user and potentially decrease their productivity. More dangerously, poorly conveyed information may result in misconceptions on the part of the user, which can lead to bad decisions or invalid conclusions with costly or even dangerous implications. To address this problem, we analyse human-produced explanations with the aim of transferring explanation expertise to machines. Guided by this analysis, we present a classification of explanatory utterances based on their content and communicative function. We then use these utterance classes and additional text analysis to construct a taxonomy of text types. This text taxonomy characterizes multisentence explanations according to the content they convey, the communicative acts they perform, and their intended effect on the addressee's knowledge, beliefs, goals and plans. We then argue that the act of explanation presentation is an action-based endeavor and introduce and define an integrated theory of communicative acts (rhetorical, illocutionary, and locutionary acts). To illustrate this theory we formalize several of these communicative acts as plan operators and then show their use by a hierarchical text planner (TEXPLAN -- Textual EXplanation PLANner) that composes natural language explanations. Finally, we classify a range of reactions readers may have to explanations and illustrate how a system can respond to these given a plan-based approach. Our research thus contributes (1) a domain-independent taxonomy of abstract explanatory utterances, (2) a taxonomy of multisentence explanations based on these utterance classes and (3) a classification of reactions readers may have to explanations as well as (4) an illustration of how these classifications can be applied computationally. | |||
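To make the plan-operator formalization concrete, here is a hedged Python sketch of how a single communicative act might be encoded for a hierarchical text planner. The field names and the example act are illustrative assumptions, not TEXPLAN's actual operator syntax:

```python
from dataclasses import dataclass

@dataclass
class CommunicativeAct:
    """Illustrative plan operator for a communicative act
    (field names are assumptions, not TEXPLAN's syntax)."""
    name: str
    constraints: list      # conditions that must hold in the knowledge base
    effects: list          # intended change in the addressee's knowledge
    decomposition: list    # sub-acts a hierarchical planner expands further

# An invented definitional act: define a term by genus and differentia.
define_term = CommunicativeAct(
    name="Define(term)",
    constraints=["has-definition(term)"],
    effects=["knows(addressee, definition(term))"],
    decomposition=["Inform(genus(term))", "Inform(differentia(term))"],
)
print(define_term.decomposition)
```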
| Individual Differences in the Performance and Use of an Expert System | | BIBA | 173-190 | |
| Richard P. Will | |||
| This field investigation studied the use of an expert system technology to gain additional insight into specific behavioral implications for information system designers. Twenty-eight engineers in an oil and gas exploration and production company participated in this study by solving a well pressure buildup analysis problem. Half of the subjects utilized a well test interpretation expert system to assist them, while the other subjects solved the problem manually. The groups were balanced across age, cognitive style and trait anxiety. Independent variables consisted of the expert system treatment, dogmatism and experience with performing the task. Impact measures consisted of decision confidence, decision quality, decision time, state anxiety and a system success indicator for those subjects utilizing the expert system. Although decision confidence was higher in the group utilizing the expert system, there was no corresponding increase in decision quality. Also, experts utilizing the expert system experienced an increase in state anxiety and rated the expert system significantly worse than the novices did. This implies that expert system technology may be more useful or appropriate for novices than for experts. | |||
| The Role of Planning in Learning a New Programming Language | | BIBA | 191-214 | |
| Jean Scholtz; Susan Wiedenbeck | |||
| This paper reports on a protocol analysis of experienced programmers beginning to program in an unknown programming language. The data show that programmers have the greatest difficulty with planning activities and spend by far the largest portion of their time engaged in them. Many of the subjects produced working solutions to the problem they were given, using plans coming from their previous experience with other languages. Such plans failed to take good advantage of the features of the new language. From our data we present a model of planning in a new language, which shows planning to be mostly a depth-first process that uses a top-down strategy of adopting plans which have proven successful in other languages, as well as a bottom-up strategy of searching for features in the new language which may suggest an appropriate plan. | |||
| Communication Knowledge for Knowledge Communication | | BIBA | 215-239 | |
| Yvonne Wærn; Sture Hägglund; Jonas Löwgren; Ivan Rankin; Tomas Sokolnicki; Anne Steinemann | |||
| Knowledge systems can be regarded as agents communicating between domain experts and end users. We emphasize the concept of "communication knowledge", distinct from domain knowledge. Three aspects of communication knowledge are identified and research related to them is presented: domain-related knowledge, discourse knowledge and mediating knowledge. This frame of reference is applied in the contexts of knowledge acquisition, user interface management in knowledge systems, text generation in expert critiquing systems and tutoring systems. We discuss the implications of the proposed framework in terms of implemented systems and finally suggest a future research agenda emanating from the analyses. | |||
| A User Enquiry Model for DSS Requirements Analysis: A Framework and Case Study | | BIBA | 241-264 | |
| Bay Arinze | |||
| The concept of Decision Support Systems (DSS) has been the focus of much discussion among information systems academicians and practitioners in recent years. One trend has been the increasing recognition of the term "DSS" as embodying not just a set of tools or technologies, but rather, signifying a change in the methodologies required in the semi-structured problem-solving domain. An important issue for DSS development is DSS requirements analysis, and several formalizations and methods have been proposed in various DSS design methodologies. This paper describes a method for performing DSS requirements analysis that is based on a model of user enquiries. This model may be used for a more focused investigation of user requirements, with the derived user enquiries subsequently translated into DSS specifications. It compares the method with those used by existing DSS methodologies, and illustrates its use via a case study involving the development and use of a Marketing DSS (MKDSS). The benefits of this enquiry-based approach are identified, and the intended contribution toward DSS methodology described. | |||
| A Catalog of Errors | | BIBA | 265-307 | |
| Jane M. Fraser; Philip J. Smith; Jack W. Smith, Jr. | |||
| This paper reviews various errors that have been described by comparing human behavior to the norms of probability, causal connection and logical deduction. For each error we review evidence on whether the error has been demonstrated to occur. For many errors, the occurrence of a bias has not been demonstrated; for others, a bias does occur, but arguments can be made that the bias is not always an error. Based on the conclusions of this review, we urge researchers and practitioners to exercise caution in referring to well-known biases and errors. | |||
| Empirical Verification of Effectiveness for a Knowledge-Based System | | BIBA | 309-334 | |
| Ajay S. Vinze | |||
| In the last decade, information centers (ICs) have proven to be a successful strategy for managing the software resources of organizations. The initial success of ICs has increased user expectations and demand for the services offered but, because ICs are considered cost centers in most organizations, there is growing pressure for them to accomplish more with fewer resources. A knowledge-based system, ICE (Information Center Expert), has been developed to assist users with software selection. The study reported here focuses on the evaluation of ICE to determine users' perception of its effectiveness. This experimental evaluation of ICE was conducted at the University of Arizona's Center for the Management of Information (CMI), which operates as an information center supporting faculty and students in the College of Business. The use of student subjects in the experiment was deemed appropriate because they are, in fact, the end users of the CMI information center. The verification of the effectiveness of ICE was attempted by conducting a laboratory experiment to test the comparative advantages of using ICE or CMI consultants to obtain assistance with software selection. The experiment was designed as a 2 x 2 factorial. The independent variables were user type (beginner or advanced) and type of consultation process (ICE or CMI consultant). The dependent variable was a measure of consultation effectiveness. Instruments for classifying users and measuring the effectiveness of a consultation process were developed and validated. | |||
| Explanation and Artificial Neural Networks | | BIBA | 335-355 | |
| Joachim Diederich | |||
| Explanation is an important function in symbolic artificial intelligence (AI). For instance, explanation is used in machine learning and in case-based reasoning and, most importantly, the explanation of the results of a reasoning process to a user must be a component of any inference system. Experience with expert systems has shown that the ability to generate explanations is absolutely crucial for the user acceptance of AI systems. In contrast to symbolic systems, neural networks have no explicit, declarative knowledge representation and therefore have considerable difficulties in generating explanation structures. In neural networks, knowledge is encoded in numeric parameters (weights) and distributed all over the system. It is the intention of this paper to discuss the ability of neural networks to generate explanations. It will be shown that connectionist systems benefit from the explicit coding of relations and the use of highly structured networks in order to allow explanation and explanation components (ECs). Connectionist semantic networks (CSNs), i.e. connectionist systems with an explicit conceptual hierarchy, belong to a class of artificial neural networks which can be extended by an explanation component that gives meaningful responses to a limited class of "How" questions. An explanation component of this kind is described in detail. | |||
| Using Temporal Logic to Support the Specification and Prototyping of Interactive Control Systems | | BIBA | 357-385 | |
| C. W. Johnson; M. D. Harrison | |||
| Accidents at Flixborough, Seveso, Bhopal, Three Mile Island, Windscale and Chernobyl have led to increasing concern over the safety and reliability of control systems. Human factors specialists have responded to this concern and have proposed a number of techniques which support the operator of such applications. Unfortunately, this work has not been accompanied by the provision of adequate tools which might enable a designer to carry it beyond the "laboratory bench" and on to the "shop floor". The following paper exploits formal, mathematically based specification techniques to provide such a tool. Previous weaknesses of abstract specifications are identified and resolved. In particular, they have failed to capture the temporal properties which human factors specialists identify as crucial to the success or failure of interactive control systems. They also provide the non-formalist with an extremely poor impression of what it would be like to interact with potential implementations. Temporal logic avoids these deficiencies. It can make explicit the sequential information which may be implicit within a design. Executable subsets of this formalization support prototyping and this provides a means of assessing the qualitative "look and feel" of potential implementations. A variety of presentation strategies, including structural decomposition and dialogue cycles, have been specified and incorporated directly into prototypes using temporal logic. Prelog, a tool for the Presentation and REndering of LOGic specifications, has been developed and its implementation is described. | |||
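As an illustration of the kind of sequential property the paper argues abstract specifications must capture, requirements such as "every alarm is eventually acknowledged" and "every operator command is immediately followed by feedback" might be written in temporal logic roughly as follows (the atomic propositions are invented for this example, and the paper's own notation may differ):

```latex
% Illustrative temporal-logic properties; the atoms are invented examples.
\Box\,(\mathit{alarm} \rightarrow \Diamond\,\mathit{acknowledged})  % liveness
\Box\,(\mathit{command} \rightarrow \bigcirc\,\mathit{feedback})    % response
```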
| "Envisioning Information," by Edward Tufte | | BIB | 387-393 | |
| Alison Black | |||
| "Hypertext Concepts, Systems and Applications. Proceedings of the European Conference on Hypertext, INRIA, France, November 1990," edited by A. Rizk, N. Streitz, and J. Andre | | BIB | 387-393 | |
| M. Sharples | |||
| "Hypertext in Context," by C. McKnight, A. Dillon, and J. Richardson | | BIB | 387-393 | |
| Patricia Wright | |||
| "Psychology of Programming," edited by J.-M. Hoc, T. R. G. Green, R. Samurcay, and D. J. Gilmore | | BIB | 387-393 | |
| Jürgen Koenemann-Belliveau | |||
| Introduction. Structure-Based Editors and Environments | | BIB | 395-397 | |
| Lisa Neal; Gerd Szwillus | |||
| Interacting with Structure-Oriented Editors | | BIBA | 399-418 | |
| Sten Minor | |||
| Why have structure-oriented editors failed to attract a wider audience? Despite their obviously good qualities, they have almost exclusively been used for education and for experimental purposes in universities and research labs. In this paper a number of common objections raised against structure-oriented editors are quoted and commented upon. Many objections concern the interaction of such editors. Therefore the aspect of interaction in structure-oriented editors is analysed in more detail. We pin down the differences between interacting with text editors and with structure-oriented editors, thus obtaining a deeper understanding of how structure-oriented editors can be improved to suit both naive and expert users. An analysis based on Norman's model for user activities is presented both for text editing and structure-oriented editing of programming languages. The analysis illustrates the trade-offs between structure-oriented editing and text editing of programs. It is also used to suggest some improvements to structure-oriented editor interaction in order to minimize the mental and physical effort required. The interaction problems have earlier been dealt with in hybrid editors, which combine structure-oriented editing and text editing in one system. This approach is also commented upon and discussed. Conceptual models are presented and compared for text editors, structure-oriented editors and hybrid editors. An interaction model for structure-oriented editors based on direct manipulation is suggested. The model is examined in terms of semantic distance, articulatory distance, and engagement as suggested by Hutchins et al. It is also related to the analysis of user activities and the discussion of conceptual models. The direct manipulation model aims at obtaining a simple but powerful interaction model for "pure" structure-oriented editors that may be appreciated by different user categories. Finally, some objections against structure-oriented editors not concerning interaction issues are commented upon, and some directions for future research are outlined. | |||
| Conceptual Issues in Language-Based Editor Design | | BIBA | 419-430 | |
| Jim Welsh; Mark Toleman | |||
| User interface choices are vital to the success of language-based editors. This paper presents a case-study of some significant user interface choices made in the design of language-based editors for software development at the University of Queensland, and discusses the conceptual models on which the choices are based. | |||
| Coherent User Interfaces for Language-Based Editing Systems | | BIBA | 431-466 | |
| Michael L. Van De Vanter; Susan L. Graham; Robert A. Ballance | |||
| Many kinds of complex documents, including programs, are based on underlying formal languages. Language-based editing systems exploit knowledge of these languages to provide services beyond the scope of traditional text editors. To be effective, these services must use the power of language-based information to broaden the options available to the user, but without revealing complex linguistic and implementation models. Users understand complex documents in terms of many overlapping structures, only some of which are related to linguistic structure. Communications with the user concerning document structures must be based on models of document structure that are natural, convenient and coherent to the user. Pan is a language-based editing and browsing system designed to support development and maintenance of complex software documents. Pan's implementation combines several approaches: unrestricted text editing, language-based browsing and editing, description-driven language definition for incremental analysis and support for multiple languages per session. Pan uses a variety of mechanisms to help users understand and manipulate complex documents effectively, in terms of underlying language when necessary, but always in the framework of a coherent, user-oriented interface. This paper describes that interface, the mechanisms needed to support it, and the complex relationships between interface design and implementation techniques demanded by the goals of the system. | |||
| Design and Structure of a Semantics-Based Programming Environment | | BIBA | 467-479 | |
| R. Bahlke; G. Snelting | |||
| We present, from a user's point of view, an overview of the PSG system, a generator for semantics-based programming environments. The PSG system generates an interactive, language-specific environment from a complete formal language definition. Both the syntax and the static and dynamic semantics of the language are specified in the definition. The definition is used to generate a context-sensitive hybrid editor and an interactive interpreter with debugging facilities. The paper describes the structure and the main features of PSG-generated environments, as well as the design decisions which led to the development of the PSG environment. | |||
| Diagram Editors = Graphs + Attributes + Graph Grammars | | BIBA | 481-502 | |
| Herbert Göttler | |||
| This paper reports on the latest developments in ongoing work, started in 1981, aimed at a general method that would considerably reduce the time necessary to develop a syntax-directed editor for any given diagram technique. Joint projects between the University of Erlangen-Nürnberg and software companies have shown that the ideas and the implemented tools can also be used for the design of CAD systems. Several editors for diagram techniques in the field of software engineering have been implemented (e.g. SDL and SADT). In addition, 3-D-modelling packages for interior design and furnishing or lighting systems have been developed. The main idea behind the approach is to represent diagrams by (formal) graphs whose nodes are enriched with attributes. Then, any manipulation of a diagram (typically the insertion of an arrow, a box, text, coloring etc.) can be expressed in terms of the manipulation of its underlying attributed representation graph. The formal description of the manipulation is done by programmed attributed graph grammars. The main advantage of using graph grammars is the unified approach to the design of the data structures and the representation of the algorithms as graphs and graph productions, respectively. The results proved that graph grammars are a software-engineering method in their own right. | |||
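The central idea can be sketched in a few lines: a diagram is an attributed graph, and a diagram edit is a graph production with an applicability test (left-hand side) and a rewrite (right-hand side). The data model below is invented for illustration and is far simpler than the programmed attributed graph grammars the paper describes:

```python
# Toy attributed-graph model of a diagram: nodes carry attribute
# dictionaries, edges are ordered pairs. Invented for illustration.
nodes = {1: {"type": "box", "label": "A"}, 2: {"type": "box", "label": "B"}}
edges = set()

def insert_arrow(graph_nodes, graph_edges, src, dst):
    """A 'production': the left-hand side requires two distinct box
    nodes with no arrow between them; the right-hand side adds the
    arrow. Returns True if the production was applicable."""
    lhs_matches = (src != dst
                   and graph_nodes.get(src, {}).get("type") == "box"
                   and graph_nodes.get(dst, {}).get("type") == "box"
                   and (src, dst) not in graph_edges)
    if lhs_matches:
        graph_edges.add((src, dst))  # the rewrite step
    return lhs_matches

insert_arrow(nodes, edges, 1, 2)  # the diagram edit "draw an arrow A -> B"
print(edges)                      # {(1, 2)}
```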
| Re-Structuring the Programmer's Task | | BIBA | 503-527 | |
| Rachel K. E. Bellamy; John M. Carroll | |||
| It is increasingly common for programming environments to provide a library of re-usable code components. Programmers build their programs by piecing together these components and, when necessary, specializing them or creating new components. Thus, finding and composing components become central programming tasks. In this paper, we analyse the Smalltalk/V environment with respect to these programming tasks and develop a redesign in which code components can be borrowed and manipulated under the task-oriented rubric of projects. | |||
| Automated Customization of Structure Editors | | BIBA | 529-563 | |
| Barbara Staudt Lerner | |||
| A common method of developing structure editors is to generate them using an environment generation tool, such as Gandalf or the Synthesizer Generator. One weakness of this approach is that the user interfaces of the generated structure editors tend to be difficult to customize to individual languages and users. Customization is typically left to the user with macros and mode settings. An alternative described in this paper is automated customization. To support automated customization, the system must determine what customizations to perform, when to perform them, and how to evaluate them without user intervention. This paper reports on mechanisms added to the Gandalf environment generation system to support automated customization, as well as results of experimentation with these mechanisms. The major results of experimentation are the following: automated customization resulted in a 7% decrease in the number of commands required to complete a task, and up to a 25% reduction in the number of errors encountered. In addition, the evaluation mechanism performed well, correctly evaluating 95% of the automated actions. | |||
| Interface Structures: Conceptual, Logical, and Physical Patterns Applicable to Human-Computer Interaction | | BIBA | 565-593 | |
| Siegfried Treu | |||
| Structural patterns are known to be important to human memory and cognition. They are essential to the knowledge representation of conceptual, logical and physical entities. At the same time, computer interfaces are implemented using a variety of logical and physical structures that have implications for human use. These two categories of structures, representing both sides of the human-computer partnership, are characterized and compared. Emphasis is on identifying a basic set of structures amenable both to the user's mind and to the computer-based application. The user is capable of conceiving and visualizing structures in each of several different representation "spaces" behind the interface surface. Careful structural mappings between these spaces and the user-visible interface are essential. To aid interface designers in these tasks, a formal definition of interface structure is proposed and an example specification is presented. Expected benefits and required research are discussed. | |||
| Stabilizing Student Knowledge in Open Structured CAI | | BIBA | 595-612 | |
| Yoneo Yano; Akihiro Kashihara; William McMichael | |||
| This paper describes STAR, a system for stabilizing student knowledge in a mixed initiative CAI application. The system characterizes knowledge as existing at three levels of stability: required, ambiguous or stable. The primary instructional objective of the system is to improve the stability level of student knowledge. The knowledge acquisition environment utilizes target domain knowledge and knowledge acquired from the student to generate a dynamic student model. The student model, in turn, guides the selection of appropriate instructional strategies. The operation of the system is demonstrated using an English vocabulary development activity. | |||
| A Protocol-Based Coding Scheme for the Analysis of Medical Reasoning | | BIBA | 613-652 | |
| Frank Hassebrock; Michael J. Prietula | |||
| One of the most common methods of codifying and interpreting human knowledge is through the use of verbal protocol analysis. Although the application of this methodology has increased in recent years, few detailed examples are readily available in the literature. This paper discusses the theoretical issues and methodological procedures pertaining to the analysis of verbal protocols collected from physicians engaged in medical problem solving. We first present a brief historical perspective on verbal protocol methodology. We then discuss how we have come to view the task of medical diagnosis both in general and in particular with respect to a specific specialty -- congenital heart disease. Next, we describe and provide examples of our methodology for coding verbal protocols of physicians into abstract, but meaningful objects which are elements of a theory of diagnostic reasoning. In particular, we demonstrate how the coding scheme can represent an important aspect of medical problem solving behavior called a line of reasoning. We conclude by proposing how such analysis is important to understanding the psychology of medical problem solving and how this type of analysis plays an important role in the development of medical artificial intelligence systems and educational efforts directed toward the development of expertise in medical problem solving. | |||
| Effects of Semantic Similarity, Omission Probability and Number of Alternatives in Computer Menu Search | | BIBA | 653-677 | |
| Byron J. Pierce; Stanley R. Parkinson; Norwood Sisson | |||
| An experiment was conducted to assess the influence of semantic relatedness, omission probability and number of alternatives on search strategy and response accuracy in computer menu selection. Search strategies were defined as either self-terminating, exhaustive, or redundant and a direct measure of search type was provided in a condition employing sequential presentation of menu alternatives. A simultaneous condition was included to test the generality of results obtained with sequential presentation. Regression analyses indicated that semantic relatedness, omission probability and number of alternatives were all significant predictors of search strategy and response accuracy. Mode of presentation, sequential or simultaneous, was not significant in any of the analyses. | |||
| Menu Search and Selection Processes: A Quantitative Performance Model | | BIBA | 679-702 | |
| Byron J. Pierce; Norwood Sisson; Stanley R. Parkinson | |||
| A criterion-based model is proposed that accounts for variations in search strategies and response accuracy in a computer menu search task. Factors considered by the model are (a) user-perceived relationships among target items sought and menu alternatives available for selection, (b) number of alternatives available for selection and (c) the probability of an omission situation where the target item is not subsumed under any of the alternatives available for selection. Results reported in Pierce, Parkinson and Sisson (1992) (Int. J. Man-Machine Studies, 37, 653-677) showed that all three factors significantly influenced menu search and response accuracy. A data-fitting exercise is described in which search strategy and response accuracy data were fitted to model prediction functions by estimating best-fitting values for model criteria. It is shown that processes suggested by the model are consistent with the majority of the findings obtained from analyses of menu task performance data. Note: Errata on Table 2 (p. 685) of this paper in V. 38, N. 6, pp. 1057-1058. | |||
| Feedback Requirements for Automatic Speech Recognition in the Process Control Room | | BIBA | 703-719 | |
| C. Baber; D. M. Usher; R. B. Stammers; R. G. Taylor | |||
| Automatic Speech Recognition (ASR) has great potential for use in control room systems; to date, there has been little research into the human factors issues this raises. For example, careful consideration needs to be given to the provision of adequate feedback to the user. We concentrate on the two main types of visual feedback: textual and symbolic. Two studies reported here show that little difference exists between them in user performance on a task requiring spoken control of a process. However, the results demonstrate a significant reduction in learning time when textual and symbolic feedback are combined. We defined the correction of device misrecognitions as a verbal decision task, for which Study 1 shows that textual feedback is most appropriate. However, Study 2 shows that textual feedback is more likely to be misunderstood than either symbols or a combination of text and symbols. A combination of both text and symbols is proposed as the most efficient form of feedback for the use of ASR in control room systems. | |||
| Eliciting Semantic Relations for Empirically Derived Networks | | BIBA | 721-750 | |
| Nancy J. Cooke | |||
| Knowledge elicitation is a critical, yet difficult, process in the development of knowledge-based systems. Pathfinder, a network scaling technique that elicits and represents knowledge in the form of graph structures, has been proposed as a means of overcoming some of the difficulties of other elicitation techniques. However, Pathfinder networks are limited in that their links represent associative, but not semantic, information about conceptual relations. This research addresses the problem of eliciting semantic relations in order to enrich the Pathfinder network representation and increase its potential as a knowledge-elicitation technique. In this paper the SCAN (Sorting, Clustering and Naming) methodology is described and illustrated using links in a network of 20 common concepts. SCAN is also applied to a network of programming concepts. Finally, the methodology is evaluated and compared to related methodologies. Results of these studies indicate that SCAN is a promising link-labelling methodology. | |||
| MCC -- Multiple Correlation Clustering | | BIBA | 751-765 | |
| J. R. Doyle | |||
| A clustering algorithm is described which is powerful, in that at each iterative step of the method global information is used to constrain the algorithm's convergence towards a solution. It is stable in the face of missing data in the input; it is efficient, in that it will extract a small signal from a lot of noise; it is impervious to multicollinearity; and it may be used in two-way clustering. Each of these claims is illustrated by its application to different data sets. Despite these advantages, the algorithm is easy to implement and understand: it is sufficient to know what a correlation coefficient is in order to understand the guts of the algorithm. Because the program repeatedly correlates correlation matrices, it is called here Multiple Correlation Clustering, or MCC for short. | |||
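The abstract's one-line description of the mechanism ("the program repeatedly correlates correlation matrices") can be sketched directly. The convergence test and the sign-pattern rule for reading off clusters below are illustrative assumptions, not Doyle's published procedure:

```python
import numpy as np

def mcc_sketch(data, max_iter=50, tol=1e-6):
    """Start from the column-correlation matrix of the data, then
    repeatedly take the correlation matrix of that matrix's columns.
    Entries tend toward +/-1, so variables whose rows end up with the
    same sign pattern are grouped into one cluster."""
    r = np.corrcoef(data, rowvar=False)
    for _ in range(max_iter):
        r_next = np.corrcoef(r, rowvar=False)
        if np.max(np.abs(r_next - r)) < tol:
            break
        r = r_next
    signs = [tuple(np.sign(row).astype(int)) for row in r]
    clusters = {}
    for var, pattern in enumerate(signs):
        clusters.setdefault(pattern, []).append(var)
    return list(clusters.values())

rng = np.random.default_rng(0)
base = rng.normal(size=(100, 2))
data = np.column_stack([base[:, 0], base[:, 0] + 0.1 * rng.normal(size=100),
                        base[:, 1], base[:, 1] + 0.1 * rng.normal(size=100)])
print(mcc_sketch(data))  # expect a grouping like [[0, 1], [2, 3]]
```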
| A Comparison of the Effects of Icons and Descriptors in a Videotex Menu Retrieval | | BIBA | 767-777 | |
| James N. MacGregor | |||
| The paper describes an experiment to test whether icons improve performance with computer menus because of (a) the additional information they provide, or (b) inherent pictorial properties. The experiment used three different versions of menu pages adapted from a national videotex system. In one version, the menus consisted of labels only; in the second, of the labels plus text descriptors; and in the third, of the labels plus icons. The results indicated that adding icons to videotex menus had the same effect as adding equivalent textual descriptors. Neither reduced response times, while both reduced errors by the same amount (40%). Furthermore, the effect of both icons and descriptors was entirely attributable to a reduction in a specific type of error. In the absence of either icons or descriptors, subjects frequently failed to recognize any of the menu options as relevant, including the correct one, and wrongly selected a "none of the above" option. Adding descriptors and icons appeared to specify the contents of categories sufficiently to reduce this type of error. However, caution should be exercised in interpreting the results. The study used specific sets of icons and descriptors, and the results may not generalize to other sets of icons or descriptors. Similarly, the study used videotex information-retrieval menus and the results may not generalize to software command menus and other applications. | |||
| Menu Organization through Block Clustering | | BIBA | 779-792 | |
| Mark S. Shurtleff | |||
| Block clustering is proposed as a method to derive the semantic relationships among the entities that comprise application programs, operating system environments or programming environments. The procedure was derived from the literature on clustering in the biological sciences. Its advantages are that it relies on information obtained from design specifications, eliminating difficulties associated with designing through "expert" opinion and allowing the procedure to be applied early in the design of a system. The procedure also works well with systems possessing either a small or a large number of entities. Two case studies of the block clustering method are presented for two different fourth-generation programming languages. Specifically, the 37 HyperTalk 1.5 properties were block clustered and found to yield six menu topics, and the 98 SuperTalk 1.5 properties were block clustered and found to yield 11 menu topics. The results illustrate the utility of block clustering analysis for both small and large systems. | |||
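As a toy illustration of deriving menu topics from design specifications, the sketch below block-clusters a binary entity-attribute structure by grouping entities with identical attribute profiles. This grouping rule is a deliberate simplification for illustration, not Shurtleff's actual procedure, and the example properties are invented:

```python
def block_cluster(entities):
    """Toy block clustering: group entities whose attribute sets are
    identical, treating each distinct profile as one block (a
    candidate menu topic). Illustrative only."""
    blocks = {}
    for name, attrs in entities.items():
        blocks.setdefault(frozenset(attrs), []).append(name)
    return list(blocks.values())

# Invented example: properties tagged with specification features.
props = {
    "textFont": {"text", "appearance"},
    "textSize": {"text", "appearance"},
    "location": {"geometry"},
    "rect":     {"geometry"},
    "visible":  {"state"},
}
print(block_cluster(props))
# [['textFont', 'textSize'], ['location', 'rect'], ['visible']]
```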
| A Decision Theoretic Framework for Approximating Concepts | | BIBA | 793-809 | |
| Y. Y. Yao; S. K. M. Wong | |||
| This paper explores the implications of approximating a concept based on the Bayesian decision procedure, which provides a plausible unification of the fuzzy set and rough set approaches for approximating a concept. We show that if a given concept is approximated by one set, the same result given by the α-cut in the fuzzy set theory is obtained. On the other hand, if a given concept is approximated by two sets, we can derive both the algebraic and probabilistic rough set approximations. Moreover, based on the well known principle of maximum (minimum) entropy, we give a useful interpretation of fuzzy intersection and union. Our results enhance the understanding and broaden the applications of both fuzzy and rough sets. | |||
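The two cases the paper distinguishes can be illustrated directly: approximating a concept by one set reproduces an α-cut, while approximating it by two sets yields positive, boundary and negative regions in the rough set style. The membership grades and thresholds below are invented examples, not values derived in the paper:

```python
def alpha_cut(membership, alpha):
    """One-set approximation of a fuzzy concept: keep the elements
    whose membership grade reaches the threshold alpha."""
    return {x for x, m in membership.items() if m >= alpha}

def two_set_approximation(membership, alpha, beta):
    """Two-set approximation in the probabilistic rough set style:
    accept elements with grade >= alpha, reject those <= beta, and
    leave the rest in a boundary region (beta < grade < alpha)."""
    positive = {x for x, m in membership.items() if m >= alpha}
    negative = {x for x, m in membership.items() if m <= beta}
    boundary = set(membership) - positive - negative
    return positive, boundary, negative

mu = {"a": 0.9, "b": 0.7, "c": 0.4, "d": 0.1}
print(alpha_cut(mu, 0.5))                   # {'a', 'b'}
print(two_set_approximation(mu, 0.8, 0.2))  # ({'a'}, {'b', 'c'}, {'d'})
```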
| Panel Session at the HCI'92 Conference, York, UK: "HCI -- Where's the Practice" | | BIB | 811-821 | |
| Clive Warren | |||
| Activity Theory: The New Direction for HCI? "Designing Interaction: Psychology at the Human-Computer Interface," edited by J. M. Carroll | | BIB | 811-821 | |
| Stephen W. Draper | |||
| Activity Theory: The New Direction for HCI? "Through the Interface: A Human Activity Approach to User Interface Design," by S. Bødker | | BIB | 811-821 | |
| Stephen W. Draper | |||