IJHCS Tables of Contents: 40 41 42 43 44 45 46 47 48 49 50 51

International Journal of Human-Computer Studies 41

Editors: B. R. Gaines
Dates: 1994
Volume: 41
Publisher: Academic Press
Standard No: ISSN 0020-7373; TA 167 A1 I5
Papers: 39
Links: Table of Contents
  1. IJHCS 1994 Volume 41 Issue 1/2
  2. IJHCS 1994 Volume 41 Issue 3
  3. IJHCS 1994 Volume 41 Issue 4
  4. IJHCS 1994 Volume 41 Issue 5
  5. IJHCS 1994 Volume 41 Issue 6

IJHCS 1994 Volume 41 Issue 1/2

Special Issue: Object-Oriented Approaches in Artificial Intelligence and Human-Computer Interaction

Editorial: Object-Oriented Approaches in Artificial Intelligence and Human-Computer Interaction BIBA 1-3
  Hermann Kaindl
Object orientation is characterized by a blending of several concepts. Among other areas, roots can be found in Software Engineering and Artificial Intelligence (AI). Moreover, there are important relationships to certain aspects of Human-Computer Interaction (HCI). Recently, "object-oriented" (O-O) has become a buzz word, and many people believe that this paradigm has the potential to solve important issues.
   Unfortunately, the respective research communities emphasize the differences more than the commonalities. This has led to duplicated research efforts due to a lack of communication. Therefore, the main goal of this special issue is to focus on integration and mutual dissemination of results.
   When looking at the papers selected for this special issue, we can categorize them according to their contribution from one area to another. In fact, some of the papers fall into more than one of these categories. The relationships between O-O, AI and HCI can be visualized by assigning these areas to the vertices of a triangle.
An Object-Based Representation System for Organic Synthesis Planning BIBA 5-32
  Amedeo Napoli; Claude Laurenco; Roland Ducournau
In this paper, we present an application of object-based representation techniques to the design of a knowledge-based system for organic synthesis planning. The notion of an object-based representation system described in this paper relies on the integration of basic programming styles, such as message sending and access-oriented programming, with reasoning capabilities, such as classification. The basis for integration is the subsumption relation that is used jointly with inheritance to provide a two-dimensional universe in which representation and problem solving can be performed successfully.
Developing Integrated Object Environments for Building Large Knowledge-Based Systems BIBA 33-58
  Jean-Paul A. Barthes
This paper discusses the development of environments for supporting large persistent knowledge-based systems, based around the concept of object common to artificial intelligence, programming languages, and object-oriented databases. A prototype, MOSS, was constructed to explore the difficulties that one could encounter. The MOSS example is used to report how various features taken from the different fields interact when assembled in a single system, as a consequence of global design choices.
Class Library Implementation of an Open Architecture Knowledge Support System BIBA 59-107
  Brian R. Gaines
Object-oriented class libraries offer the potential for individual researchers to manage the large bodies of code generated in the experimental development of complex interactive systems. This article analyses the structure of such a class library that supports the rapid prototyping of a wide range of systems including collaborative networking, shared documents, hypermedia, machine learning, knowledge acquisition and knowledge representation, and various combinations of these technologies. The overall systems architecture is presented in terms of a heterogeneous collection of systems providing a wide range of application functionalities. Examples are given of group writing, multimedia and knowledge-based systems which are based on combining these functionalities. The detailed design issues of the knowledge representation server component of the system are analysed in terms of requirements, the current state of the art, and the underlying theoretical principles that lead to an effective object-oriented implementation. It is shown that modeling the server through intensional algebraic semantics leads naturally to an open-architecture class library into which new data types may be plugged as required without change to the basic deductive engine. It is concluded that the development of a principled class library targeted on complex interactive applications does empower the individual researcher in the rapid prototyping of experimental systems. However, it is noted that much of the power of the approach stems from the cumulative evolution of the class library through successive applications, and hence the results may not generalize to team projects where greater rigidity is required in the class library in order to facilitate project management and inter-member coordination.
Managing Complex Objects in Peirce BIBA 109-148
  Gerard Ellis; Robert A. Levinson; Peter J. Robinson
The Peirce project (named after Charles Sanders Peirce) is an international collaborative project aiming to construct a freely available conceptual graphs workbench to support research in the conceptual graphs community in areas such as natural language processing, enterprise modelling, program specification and verification, management information systems, conceptual information retrieval, medical informatics, and construction of ontologies. Peirce advances the state of the art in conceptual graph implementations and in general complex object classification.
   At the core of the Peirce system is an abstract data type for partially ordered sets of objects (poset ADT). The poset ADT is used to organize a conceptual graph database. In this paper we give an overview of the innovative methods for complex object classification, and illustrate examples using complex object databases with hierarchies of chemical formulas, images and conceptual graph program specifications. We illustrate how conceptual graphs can be used for graphic programming in traditional domains and in organic chemistry and indicate how Peirce's complex object database supports these activities.
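The abstract above centres on an abstract data type for partially ordered sets used to classify complex objects. As an illustration only (not the Peirce implementation; the names `Poset` and `subsumes` are hypothetical), the core idea can be sketched in Python: classification of a new object amounts to finding its immediate neighbours in the partial order.

```python
# Minimal sketch of a poset ADT used for classification.
# Illustrative only; all names are hypothetical, not from Peirce.

class Poset:
    def __init__(self, subsumes):
        # subsumes(a, b) holds when a is more general than or
        # equal to b (the partial order's "a <= b" relation).
        self.subsumes = subsumes
        self.elements = []

    def insert(self, x):
        self.elements.append(x)

    def classify(self, x):
        """Return (most specific generalizations,
        most general specializations) of x among the elements."""
        above = [e for e in self.elements
                 if self.subsumes(e, x) and e != x]
        below = [e for e in self.elements
                 if self.subsumes(x, e) and e != x]
        # Keep only the immediate neighbours in the order.
        parents = [a for a in above
                   if not any(b != a and self.subsumes(a, b)
                              for b in above)]
        children = [c for c in below
                    if not any(b != c and self.subsumes(b, c)
                               for b in below)]
        return parents, children

# Feature sets ordered by inclusion: a smaller set is more general.
poset = Poset(lambda a, b: a <= b)
for s in [frozenset({"a"}), frozenset({"a", "b"}),
          frozenset({"a", "c"}), frozenset({"a", "b", "c"})]:
    poset.insert(s)

parents, children = poset.classify(frozenset({"a", "b", "c", "d"}))
print(parents)  # most specific generalizations of the query
```

In a conceptual-graph setting the `subsumes` predicate would be graph subsumption rather than set inclusion; the neighbour-finding logic stays the same.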
Default Inheritance in an Object-Oriented Representation of Linguistic Categories BIBA 149-177
  Walter Daelemans; Koenraad De Smedt
We describe an object-oriented approach to the representation of linguistic knowledge. Rather than devising a dedicated grammar formalism, we explore the use of powerful but domain-independent object-oriented languages. We use default inheritance to organize regular and exceptional behavior of linguistic categories. Examples from our work in the areas of morphology, syntax and the lexicon are provided. Special attention is given to multiple inheritance, which is used for the composition of new categories out of existing ones, and to structured inheritance, which is used to predict, among other things, to which rule domain a word form belongs.
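The mechanism the abstract describes, regular behaviour defined on a general category and overridden by exceptional subcategories, maps directly onto ordinary object-oriented inheritance. A minimal sketch, assuming hypothetical class names (this is not the authors' system), using English pluralization:

```python
# Illustrative sketch of default inheritance for linguistic
# categories: the regular rule lives on the general class, and
# subregular or exceptional categories override only what differs.

class Noun:
    def __init__(self, stem):
        self.stem = stem

    def plural(self):
        # Default (regular) rule, inherited by all nouns.
        return self.stem + "s"

class SibilantNoun(Noun):
    def plural(self):
        # Subregularity: stems ending in a sibilant take "es".
        return self.stem + "es"

class IrregularNoun(Noun):
    def __init__(self, stem, plural_form):
        super().__init__(stem)
        self._plural = plural_form

    def plural(self):
        # Full exception: a stored form overrides the default.
        return self._plural

print(Noun("cat").plural())                          # -> cats
print(SibilantNoun("bus").plural())                  # -> buses
print(IrregularNoun("child", "children").plural())   # -> children
```

The paper's multiple and structured inheritance go further than this sketch, but the payoff is the same: exceptions are stated once, at the most specific category that needs them.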
Concurrent, Object-Oriented Natural Language Parsing: The ParseTalk Model BIBA 179-222
  Udo Hahn; Susanne Schacht; Norbert Broker
The ParseTalk model of concurrent, object-oriented natural language parsing is introduced. It builds upon the complete lexical distribution of grammatical knowledge and incorporates inheritance mechanisms in order to express generalizations over sets of lexical items. The grammar model integrates declarative well-formedness criteria constraining linguistic relations between heads and modifiers, and procedural specifications of the communication protocol for establishing these relations. The parser's computation model relies upon the actor paradigm, with concurrency entering through asynchronous message passing. We consider various extensions of the basic actor model as required for distributed natural language understanding and elaborate on the semantics of the actor computation model in terms of event type networks (a graph representation for actor grammar specifications) and event networks (graphs which represent the actor parser's behavior). Besides theoretical claims, we present an interactive grammar/parser workbench, a graphical development environment with various types of browsers, tracers, inspectors and debuggers, that has been adapted to the requirements of large-scale grammar engineering in a distributed, object-oriented specification and programming framework.
Types and Inheritance in Hypertext BIBA 223-241
  Mikael Snaprud; Hermann Kaindl
Modeling tasks and domains of the real world, and especially developing a completely formal representation of the models, is very difficult. However, modeling is a key activity in, for example, object-oriented analysis and in knowledge acquisition. Since premature attempts at formal representation reinforce the problem, we propose to use semiformal representations. While hypertext is mostly used as a medium between authors and readers, our emphasis is primarily on using it as a means of semiformal representation in the early stages of domain and task modeling. In fact, we have used hypertext as a mediating representation in the course of knowledge acquisition for building a knowledge-based (expert) system, mainly for the difficult process of modeling the relevant parts of the real world. According to our experience, hypertext provides a suitable external representation for supporting the cognitive processes involved in modeling. This intermediary representation becomes increasingly more formal through defining external and internal structure of the nodes, and through typing of links and nodes.
   This paper primarily addresses the issues of typing in hypertext. We elaborate on the concepts of typed links and typed nodes, and introduce typed partitions. Moreover, we define and utilize relationships between them. Our organization of hypertext is according to object-oriented ideas, e.g. using inheritance of text. While the usefulness of inheritance in object-oriented approaches to software engineering and knowledge representation is generally known, we show the benefits of inheritance for the semiformal representation in hypertext. In summary, the use of types and inheritance in hypertext is highly recommended according to our experience.
Binding Objects to Scenarios of Use BIBA 243-276
  John M. Carroll; Robert L. Mack; Scott P. Robertson; Mary Beth Rosson
Scenarios are a natural and effective medium for thinking in general and for design in particular. Our work seeks to develop a potential unification between recent scenario-oriented work in object-oriented analysis/design methods and scenario-oriented work in the analysis/design of human-computer interaction. We illustrate this perspective by showing: (1) how scenario questioning can be used to systematically interrogate the knowledge and practices of potential users, and thereby to create object-oriented analysis models that are psychologically valid; (2) how depicting an individual object's point-of-view can serve as a pedagogical scaffold to help students of object-oriented analysis see how to identify and assign object responsibilities in creating a problem domain model; and (3) how usage scenarios can be employed to motivate and coordinate the design implementation, refactoring and reuse of object-oriented software.
Bulletin BIB 277-281
 

IJHCS 1994 Volume 41 Issue 3

A Systematic Approach to Outline Manipulation BIBA 283-308
  Geeng-Neng You; Roy Rada
The outline (table of contents) of a document provides users with a hierarchical view of the document's logical structure. Outlines reflect a conceptual model and can serve as a cognitive aid in reading and writing hypertext. Outlines may have balance along three dimensions: skeletal, lexical, and semantic. When outlines contain balance, alternative views of the outline are readily imagined. Furthermore, hypertext systems can automatically generate alternative outlines with a modified, depth-first traversal of the hypertext links and nodes. Methods for building and maintaining balanced outlines have been successfully used by writers. Readers are aware of the potential of balance and alternative outlines but lack strategies for utilizing them.
An Empirical Study on End-Users' Update Performance for Different Abstraction Levels BIBA 309-328
  Hock Chuan Chan; Kwok Kee Wei; Keng Leng Siau
Recent laboratory experiments have shown that database users tend to perform better at the conceptual level than at the logical level. The experiments measured users' performance on the tasks of database design and database retrieval. Besides design and retrieval, the third major database task is update, and user performance for updates has not been measured. With the widespread availability of databases, updates will be done frequently by end-users, so this task is gaining in importance as a measure of the usability of a database system.
   An experiment was conducted to measure the effect of different abstraction levels on user performance for updates. A conceptual level group used the entity relationship model with an entity relationship query language KQL, while a logical level group used the relational model with the standard relational language SQL. Performance was primarily measured by the accuracy of the update query. Secondary measures of time and confidence were also taken. The results showed that updates at the conceptual level were 15.4% more accurate and required only 57.8% of the time taken for logical level updates. The differences were statistically significant with p values of less than 0.03.
Applying Prolog Programming Techniques BIBA 329-350
  A. Bowles; D. Robertson; M. Vasconcelos; M. Vargas-Vera; D. Bental
Much of the skill of Prolog programming comes from the ability to harness its comparatively simple syntax in sophisticated ways. It is possible to provide an account of part of the activity of Prolog programming in terms of the application of techniques -- standard patterns of program development which may be applied to a variety of different programming problems. Numerous researchers have attempted to provide formal definitions of Prolog techniques but there has been little standardization of the approach and the computational use of techniques has been limited to small portions of the programming task. We demonstrate that techniques knowledge can be used to support programming in a wide variety of areas: editing, analysis, tracing, transformation and techniques acquisition. We summarize the main features of systems implemented by the authors for each of these types of activity and set these in the context of previous work, using a standard style of presentation. We claim that a techniques-based system which integrates these features would be worth more than the sum of its parts, since the same techniques knowledge can be shared by the different subsystems.
A Note on the Quantification of Computer Programming Skill BIBA 351-362
  Harold Stanislaw; Beryl Hesketh; Sylvia Kanavaros; Tim Hesketh; Ken Robinson
There are sound reasons for believing that expertise in computer programming consists of two components, which should both be of interest to employers. Time-based expertise corresponds to the conventional notion of expertise, and is a function solely of the time spent programming. Multiskilling expertise, by contrast, accrues through exposure to a variety of programming languages and tasks, and is related to the cognitive development of high-level programming schemata. This multidimensional model was tested by developing measures to quantify the diversity of programming language usage and the diversity of programming tasks, and then assessing programming skill in 206 computer programmers. As predicted, factor analysis identified two underlying factors. The actual amount of time spent programming and the time since first learning to program loaded highly on one factor ("time-based skill"), while the number of languages known, the diversity of language usage, and the diversity of programming tasks loaded highly on the second factor ("multiskilling"). The data also revealed that programmers tend not to keep abreast of new developments in their field. Thus, many programmers who are "expert" in the time-based sense risk obsolescence due to a lack of multiskilling expertise.
Domain and Task Representation for Tutorial Process Models BIBA 363-383
  R. H. Kemp; S. P. Smith
The teaching of procedural skills is a fruitful area of intelligent tutoring system research. Students often learn such skills most effectively by computer when a guided discovery mode is employed. So far, no general aids to developing associated software have emerged. A domain and task representation scheme is proposed that facilitates a cognitive modelling approach to the development of discovery learning systems. It is based on a classical formalism used in AI planning and utilizes recent modifications to the technique to allow hierarchical feedback to be included, efficiency to be improved, the important distinction between domain and task to be made, and task representation to be simplified.
Estimating the Number of Subjects Needed for a Thinking Aloud Test BIBA 385-397
  Jakob Nielsen
Two studies of using the thinking aloud method for user interface testing showed that experimenters who were not usability specialists could use the method. However, they found only 28-30% of known usability problems when running a single test subject. Running more test subjects increased the number of problems found, but with progressively diminishing returns; after five test subjects 77-85% of the problems had been found.
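The diminishing-returns pattern in the abstract is commonly modelled with a geometric problem-discovery curve: if each subject independently finds a proportion lam of the problems, n subjects are expected to find 1 - (1 - lam)^n of them. A sketch under that assumption, with lam taken from the single-subject figures quoted above (an illustration, not the paper's exact analysis):

```python
# Geometric problem-discovery model: expected proportion of
# usability problems found by n test subjects, given a
# per-subject detection rate lam. Rates 0.28-0.30 follow the
# single-subject figures quoted in the abstract.

def proportion_found(lam, n):
    return 1.0 - (1.0 - lam) ** n

for lam in (0.28, 0.30):
    for n in (1, 5):
        print(f"lam={lam:.2f}, n={n}: {proportion_found(lam, n):.1%}")
```

With lam between 0.28 and 0.30, five subjects land at roughly 81-83% of problems found, consistent with the 77-85% range reported.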
Mapping Domains to Methods in Support of Reuse BIBA 399-424
  John H. Gennari; Samson W. Tu; Thomas E. Rothenfluh; Mark A. Musen
In this paper, we characterize the relationship between abstract problem-solving methods and the domain-oriented knowledge bases that they use. We argue that, to reuse methods and knowledge bases, we must isolate, as much as possible, method knowledge from domain knowledge. To connect methods and domains, we define declarative mapping relations, and enumerate the classes of mappings. We illustrate our approach to reuse with the PROTEGE-II architecture and a pair of configuration tasks. Our goal is to show that the use of mapping relations leads to reuse with high payoff of saved effort.
Generation of Knowledge-Acquisition Tools from Domain Ontologies BIBA 425-453
  Henrik Eriksson; Angel R. Puerta; Mark A. Musen
Metalevel tools can support the software development process by automating the design of task- and application-specific tools. DASH is a metalevel tool that allows developers to generate domain-specific knowledge-acquisition tools from domain ontologies. Domain specialists use the knowledge-acquisition tools generated by DASH to instantiate the concepts and relationships defined in the domain ontologies. The output of the knowledge-acquisition tools is a collection of instances that constitute the knowledge base for a knowledge-based system.
   To automate the generation of appropriate tools, the DASH architecture uses a dialog-design module to produce a dialog structure that defines the target tool at the editor and window level. Given the dialog structure, a layout-design module completes the window layouts. DASH allows the developer to custom tailor the layout of the knowledge-acquisition tool for its users, and to store such modifications persistently so that they can be reapplied when the target tool is regenerated. The DASH implementation is based on a mapping problem-solving method that defines the tool-design steps. The DASH Development Environment (DDE) is an application-specific environment that supports the configuration of the mapping method and the maintenance of DASH. We have used DASH to generate several knowledge-acquisition tools for a broad range of application tasks.
Bulletin BIB 455-456
 

IJHCS 1994 Volume 41 Issue 4

Novice Programmer Errors: Language Constructs and Plan Composition BIBA 457-480
  Alireza Ebrahimi
Why do novice programmers have difficulties in programming, and what are the probable causes of these errors? This study analyses the roles of Language Construct comprehension and Plan Composition, and their relationship to each other, in novice programming errors. The experiment was conducted with 80 novice programmers who were divided into four groups of 20. Each group enrolled in one of the following programming language courses: Pascal, C, FORTRAN, or LISP.
   The results of the study indicate that the misunderstanding of Plan Composition and semantic misinterpretation of Language Constructs are the two major causes of errors. In addition, the study has concluded that these errors are highly correlated.
Effects of Data Model and Task Characteristics on Designer Performance: A Laboratory Study BIBA 481-508
  Dinesh Batra; Solomon R. Antony
A laboratory experiment was conducted to compare designer performance in modelling user views using the relational and the entity relationship models. A user view is a form or a report used in an information system and is one of the sources of user requirements. Previous studies have not considered the effect of user view characteristics on designer performance. This study considered nine user views, which varied in two task characteristics: degree of nesting and derivation span. Degree of nesting is the number of nests in a user view, where a nest pertains to a group of attributes that is multivalued with respect to another group of attributes. Derivation span refers to the presence of attributes from different objects in the same view. Three levels each of degree of nesting and derivation span were considered. Subjects enrolled in a database class were trained in one of the two modelling approaches and were asked to conduct conceptual database design of a specified problem. Each view was graded using a predefined scheme. Results indicated that subjects using the entity relationship model took longer to complete the task but outscored subjects using the relational model. The degree of nesting emerged as a significant predictor of scores. The derivation span also seemed to account for some of the variation in scores, although the effect was not statistically significant. The findings suggest that for novice designers the entity relationship model is the appropriate choice for modelling user views. Further, the degree of nesting is an important indicator of the complexity of the user view.
An Integrated Framework for Task Analysis and Systems Engineering: Approach, Example and Experience BIBA 509-526
  K. Brathen; E. Nordo; K. Veum
Analysis and modelling of human-machine systems during the early phases of development are discussed in this paper. Analysis and simulation of a comprehensive discrete-event behavioral model and a component model, including both human and machine parts, are emphasized. A model language called behavior graphs is described and its characteristics are discussed. Organizational aspects of a Command, Control, Communication and Information system for fast patrol boats are discussed as an example of how the modelling and simulation approach can be employed in complex human-machine system development. The advantages of and problems with employing such an approach are outlined.
Model-Based Communicative Acts: Human-Computer Collaboration in Supervisory Control BIBA 527-551
  Patricia M. Jones; Christine M. Mitchell
Supervisory control environments can be characterized as dynamic, complex, uncertain, and risky. The cognitive demands placed on human supervisory controllers are driven by the continual need for situation assessment (including active information seeking), active goal-setting and planning, anticipatory as well as reactive control actions, and compensation for abnormal system conditions. One way to improve the human-machine system is with intelligent support systems that provide context-sensitive displays, dialogue, and resources for activity management. The Georgia Tech Mission Operations Cooperative Assistant (GT-MOCA) is an example of such a system for NASA satellite ground control. The design of GT-MOCA is based on (1) principles of human-computer cooperative problem solving, partly derived from an analysis of the human communication literature, (2) an empirical study of the use of an existing real-time expert system for satellite ground control, and (3) the OFMspert architecture, which provides dynamic intent inferencing organized around the operator function model. GT-MOCA provides three major resources for cooperative support: interactive graphics of system components, an inspectable and interactive visualization of current activity requirements, and message lists organized around major communicative functions such as advice and various types of alerts. This paper focuses on the representation of the communicative acts that are the underpinning of these message lists and details how such acts are integrated into the operator function model of activity. An analysis of GT-MOCA with respect to the human communication literature and an empirical evaluation of GT-MOCA show that it supports relevant and timely interaction with human problem solvers and provides some performance benefits.
A Taxonomy for Combining Software Engineering and Human-Computer Interaction Measurement Approaches: Towards a Common Framework BIBA 553-583
  Jenny Preece; H. Dieter Rombach
The rapid development of any field of knowledge brings with it unavoidable fragmentation and proliferation of new disciplines. The development of computer science is no exception. Software engineering (SE) and human-computer interaction (HCI) are both relatively new disciplines of computer science. Furthermore, as both names suggest, they each have strong connections with other subjects. SE is concerned with methods and tools for general software development based on engineering principles. This discipline has its roots not only in computer science but also in a number of traditional engineering disciplines. HCI is concerned with methods and tools for the development of human-computer interfaces, assessing the usability of computer systems and with broader issues about how people interact with computers. It is based on theories about how humans process information and interact with computers, other objects and other people in the organizational and social contexts in which computers are used. HCI draws on knowledge and skills from psychology, anthropology and sociology in addition to computer science.
   Both disciplines need ways of measuring how well their products and development processes fulfil their intended requirements. Traditionally, SE has been concerned with "how software is constructed" and HCI with "how people use software". Given the different histories of the disciplines and their different objectives, it is not surprising that they take different approaches to measurement. Thus, each has its own distinct "measurement culture".
   In this paper we analyse the differences and the commonalities of the two cultures by examining the measurement approaches used by each. We then argue the need for a common measurement taxonomy and framework, which is derived from our analyses of the two disciplines. Next we demonstrate the usefulness of the taxonomy and framework via specific example studies drawn from our own work and that of others and show that, in fact, the two disciplines have many important similarities as well as differences and that there is some evidence to suggest that they are growing closer. Finally, we discuss the role of the taxonomy as a framework to support: reuse, planning future studies, guiding practice and facilitating communication between the two disciplines.
Tasks and Ontologies in Engineering Modelling BIBA 585-617
  Jan Top; Hans Akkermans
Constructing models of physical systems is a recurring activity in engineering problem solving. This paper presents a generic knowledge-level analysis of the task of engineering modelling. Starting from the premise that modelling is a design-like activity, it proposes the Specify-Construct-Assess (SCA) problem-solving method for decomposition of the modelling task. A second structuring principle is found in the distinction between and separation of different ontological viewpoints. Here, we introduce three engineering ontologies that have their own specific roles and methods in the modelling task: functional components, physical processes, mathematical constraints. The combination of the proposed task and ontology decompositions leads to a particular approach to modelling that we call evolutionary modelling. This approach is supported by a knowledge-based system called QuBA. The implications of evolutionary modelling for structuring the modelling process, the content of produced models, as well as for the organization of reusable model fragment libraries are discussed.

Book review

"Computers, Communication and Usability: Design Issues, Research and Methods for Integrated Services," edited by P. F. Byerley, P. J. Barnard, and J. May BIB 619-623
  Hartmut Wandke; Marion Wittstock
Bulletin BIB 625-631
 

IJHCS 1994 Volume 41 Issue 5

MacSHAPA and the Enterprise of Exploratory Sequential Data Analysis (ESDA) BIBA 633-681
  Penelope Sanderson; Jay Scott; Tom Johnston; John Mainzer; Larry Watanabe; Jeff James
This paper discusses some of the problems associated with observational data analysis for complex domains, and introduces the term "exploratory sequential data analysis" (ESDA) to describe the different kinds of observational data analysis currently being performed in many areas of the behavioral and social sciences. The development and functionality of a software tool -- MacSHAPA -- for certain kinds of ESDA is described. MacSHAPA is designed to bring investigators into closer contact with their data and to help them achieve greater research productivity and quality. MacSHAPA allows investigators to see their data in various ways, to enter it, edit it and encode it, and to carry out statistical analyses and make reports. MacSHAPA's relation to other ESDA software tools is indicated throughout the paper.
A Methodology and Interactive Environment for Iconic Language Design BIBA 683-716
  S. K. Chang; G. Polese; S. Orefice; M. Tucci
We describe a design methodology for iconic languages based upon the theory of icon algebra to derive the meaning of iconic sentences. The design methodology serves two purposes. First, it is a descriptive model for the design process of the iconic languages used in the Minspeak systems for augmentative communication. Second, it is also a prescriptive model for the design of other iconic languages for human-machine interfaces. An interactive design environment based upon this methodology is described. This investigation raises a number of interesting issues regarding iconic languages and iconic communication.
Analyses of Factors Related to Positive Test Bias in Software Testing BIBA 717-749
  Laura Marie Leventhal; Barbee Eve Teasley; Diane Schertler Rohlman
In earlier work, we have shown that software testers exhibit positive test bias. Positive test bias is the pervasive behavioral phenomenon in which hypothesis testers tend to test a hypothesis with data which confirms the hypothesis. However, in software testing this behavior may be counter-productive, since it may be more effective to test with data which are designed to disconfirm the hypothesis.
   The first study considered how positive test bias is influenced by the expertise level of the subjects, the completeness of the software specifications and whether or not the programs contained errors. The results demonstrated strong evidence of positive test bias regardless of condition. The effects appear to be partially mitigated by increasingly higher levels of expertise and by increasingly more complete specifications. In some cases, the effect is also increased by the presence of errors. A second study used talk-aloud protocols to explore the kinds of hypotheses testers generate during testing. The results further emphasize that subjects test their programs in a biased way and support the notion that the program specification drives testers' hypotheses. We conclude that positive test bias is a critical concern in software testing and may have a seriously detrimental effect on the quality of testing. The results further emphasize the importance of complete and thorough program specifications in order to enhance effective testing.
Capturing Scheduling Knowledge from Repair Experiences BIBA 751-773
  Kazuo Miyashita; Katia Sycara; Riichiro Mizoguchi
In recent years, much effort has been devoted to solving scheduling problems with artificial intelligence (AI) techniques. However, through the development of a variety of AI-based scheduling systems, it has become well known that eliciting effective problem-solving knowledge from human experts is arduous work, and that human schedulers typically lack the knowledge for solving large and complicated scheduling problems in a sophisticated manner. This paper discusses the characteristics of a scheduling problem and describes prior work on acquiring human schedulers' knowledge in scheduling expert systems. Then, a case-based approach, implemented in a system called CABINS, is presented for capturing human experts' preferential criteria about scheduling quality and control knowledge to speed up problem solving. Through iterative schedule repair, CABINS improves the quality of sub-optimal schedules; during this process it draws on past repair experiences for (1) repair tactic selection and (2) repair result evaluation. We demonstrate empirically that CABINS can revise a schedule along the objectives captured in its case base and can improve the efficiency of the revision process while preserving the quality of the resultant schedule.
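The general shape of case-based iterative repair can be sketched as follows. This is a generic, simplified illustration, not the CABINS algorithm; all names, the similarity measure, and the stopping rule are invented for this sketch. The loop retrieves the most similar past case to choose a repair tactic and stops when the evaluated cost no longer improves.

```python
def similarity(a, b):
    """Inverse-distance similarity between two cost-feature vectors."""
    return 1.0 / (1.0 + sum(abs(x - y) for x, y in zip(a, b)))

def repair_schedule(schedule, evaluate, tactics, case_base, max_iters=10):
    """Generic case-based repair loop (simplified; not the CABINS algorithm).

    evaluate(schedule) returns a vector of cost features (lower is better).
    Each case in case_base pairs the features of a past repair situation
    with the name of the tactic that worked there; the most similar past
    case selects the tactic to apply next.
    """
    for _ in range(max_iters):
        features = evaluate(schedule)
        best_case = max(case_base, key=lambda c: similarity(c["features"], features))
        candidate = tactics[best_case["tactic"]](schedule)
        if sum(evaluate(candidate)) >= sum(evaluate(schedule)):
            break  # repair no longer improves total cost: stop
        schedule = candidate
    return schedule
```

In this sketch the case base stands in for the captured repair experiences: retrieval replaces hand-coded tactic-selection rules, which is the knowledge-acquisition burden the abstract describes.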
Alphanumeric Entry on Pen-Based Computers BIBA 775-792
  I. Scott MacKenzie; Blair Nonnecke; Stan Riddersma; Craig McQueen; Malcolm Meltz
Two experiments were conducted to compare several methods of numeric and text entry for pen-based computers. For numeric entry, the conditions were hand printing, tapping on a soft keypad, stroking a moving pie menu, and stroking a pie pad. For the pie conditions, strokes were made in the direction that numbers appear on a clock face. For the moving pie menu, strokes were made directly in the application, as with hand printing. For the pie pad, strokes were made on top of one another on a separate pie pad, with the results sent to the application. Based on speed and accuracy, the entry methods from best to worst were soft keypad (30 wpm, 1.2% errors), hand printing (18.5 wpm, 10.4% errors), pie pad (15.1 wpm, 14.6% errors), and moving pie menu (12.4 wpm, 16.4% errors).
   For text entry, the conditions were hand printing, tapping on a soft keyboard with a QWERTY layout, and tapping on a soft keyboard with an ABC layout (two rows of sequential characters). Tapping on the soft QWERTY keyboard was the quickest (23 wpm) and most accurate (1.1% errors) entry method. Hand printing was slower (16 wpm) and more error prone (8.1% errors). Tapping on the soft ABC keyboard was very accurate (0.6% errors) but was slower (13 wpm) than the other methods.
   These results represent the first empirical tests of entry speed and accuracy using a stylus to tap on a soft keyboard. Although handwriting (with recognition) is touted as the entry method of choice for pen-based computers, the much simpler technique of tapping on a soft keyboard is faster and more accurate.
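For reference, the speed and error figures quoted above presumably follow the usual text-entry conventions: one "word" is defined as five characters, and errors are reported as a percentage of characters entered. A minimal sketch under those assumed conventions:

```python
def words_per_minute(num_chars, seconds):
    """Text-entry speed with the standard convention: one word = 5 characters."""
    words = num_chars / 5.0
    minutes = seconds / 60.0
    return words / minutes

def error_rate(num_errors, num_chars):
    """Errors as a percentage of characters entered."""
    return 100.0 * num_errors / num_chars

# Example: 150 characters entered in 60 seconds with 2 errors.
print(words_per_minute(150, 60))          # 30.0 wpm
print(round(error_rate(2, 150), 1))       # 1.3 % errors
```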
Bulletin BIB 793-800
 

IJHCS 1994 Volume 41 Issue 6

Varieties of Knowledge Elicitation Techniques BIBA 801-849
  Nancy J. Cooke
Information on knowledge elicitation methods is widely scattered across the fields of psychology, business management, education, counseling, cognitive science, linguistics, philosophy, knowledge engineering and anthropology. The purpose of this review is to (1) identify knowledge elicitation techniques and the associated bibliographic information, (2) organize the techniques into categories on the basis of methodological similarity, and (3) summarize the strengths, weaknesses, and recommended applications of each category of techniques. The review is intended to provide a starting point for those interested in applying or developing knowledge elicitation techniques, as well as for those more generally interested in exploring the scope of the available methodology.
Experiences with CLARE: A Computer-Supported Collaborative Learning Environment BIBA 851-879
  Dadong Wan; Philip M. Johnson
Current collaborative learning systems focus on maximizing shared information. However, "meaningful learning" is not simply information sharing but also knowledge construction. CLARE is a computer-supported learning environment that facilitates meaningful learning through collaborative knowledge construction. It provides a semi-formal representation language called RESRA and an explicit process model called SECAI. Experimental evaluation through 300 hours of classroom usage indicates that CLARE does support meaningful learning. It also shows that a major bottleneck to computer-mediated knowledge construction is summarization. Lessons learned through the design and evaluation of CLARE provide new insights into both collaborative learning systems and collaborative learning theories.
Situating Natural Language Understanding Within Experience-Based Design BIBA 881-913
  Justin Peterson; Kavi Mahesh; Ashok Goel
Building useful systems with an ability to understand "real" natural language input has long been an elusive goal for Artificial Intelligence. Well-known problems such as ambiguity, indirectness, and incompleteness of natural language inputs have thwarted efforts to build natural language interfaces to intelligent systems. In this article, we report on our work on a model of understanding natural language design specifications of physical devices such as simple electrical circuits. Our system, called KA, solves the classical problems of ambiguity, incompleteness and indirectness by exploiting the knowledge and problem-solving processes in the situation of designing simple physical devices. In addition, KA acquires its knowledge structures (apart from a basic ontology of devices) from the results of its problem-solving processes. Thus, KA can be bootstrapped to understand design specifications and user feedback about new devices using the knowledge structures it acquired from similar devices designed previously.
   In this paper, we report on three investigations in the KA project. Our first investigation demonstrates that KA can resolve ambiguities in design specifications as well as infer unarticulated requirements using the ontology, the knowledge structures, and the problem-solving processes provided by its design situation. The second investigation shows that KA's problem-solving capabilities help ascertain the relevance of indirect design specifications, and identify unspecified relations between detailed requirements. The third investigation demonstrates the extensibility of KA's theory of natural language understanding by showing that KA can interpret user feedback as well as design requirements. Our results demonstrate that situating language understanding in problem solving, such as device design in KA, provides effective solutions to unresolved problems in natural language processing.
Transforming Verbal Descriptions into Mathematical Formulas in Spreadsheet Calculation BIBA 915-948
  Pertti Saariluoma; Jorma Sajaniemi
A common subtask in spreadsheet calculation is the transformation of verbal task instructions into spreadsheet formulas. This task can be used to study the relation of imagery to thinking. Research using physics and mathematics problems has indicated that mental transformation from verbal to mathematical representations is not necessarily direct but is intermediated by imagery. Therefore, a human-computer interaction task such as spreadsheet calculation provides a good task environment for analysing mental imagery operations and the role of intermediate imagery in thinking tasks. Testing the use of imagery in spreadsheet calculations also improves our understanding of representational systems used in this specific task and in user interfaces in general.
   Four experiments provided different types of evidence for the intermediate imagery hypothesis, according to which subjects do not directly transform verbal instructions into spreadsheet formulas. They first code an overall image of the areas referred to by the verbal instructions, segment it into suitable fields, and only then write down the set of formulas that best extract the demanded information. The field borders used in this segmentation are often imagined and do not appear at all in the original verbal task instructions.
   Intermediate imagery is a relevant notion in discussing the construction of user models because the most important current models, such as GOMS, assume only propositional representations. Also, the use of images should be taken into account in designing spreadsheet packages by providing features which aid analog information processing.
Improved Efficiency through I- and E-Feedback: A Trackball with Contextual Force Feedback BIBA 949-974
  Frits L. Engel; Peter Goossens; Reinder Haakma
It has been argued by Engel and Haakma (1993, Expectations and feedback in user-system communication, International Journal of Man-Machine Studies, 39, 427-452) that for user-system communication to become more efficient, machine interfaces should present both early layered I-feedback on the current partial message interpretation and layered expectations (E-feedback) concerning the message components still to be communicated. As a concrete example of this claim, this paper describes an experimental trackball device that provides the user with the less common E-feedback in addition to the conventional layered I-feedback in the form of the momentary cursor position on the screen and the kinetic forces from the ball. In particular, the machine expresses its expectation concerning the goal position of the cursor by exerting an extra force on the trackball.
   Two optical sensors and two servo motors are used in the described trackball device with contextual force feedback. One combination of position sensor and servo motor handles the cursor position and tactile feedback along the x-axis, the other combination controls that along the y-axis. By supplying supportive force feedback as a function of the current display contents and the momentary cursor position, the user's movements are guided towards the cursor target position expected by the machine. The force feedback diminishes the visual processing load of the user and combines increased ease of use with robustness of manipulation.
   Experiments with a laboratory version of this new device have shown that the force feedback significantly enhances speed and accuracy of pointing and dragging, while the effort needed to master the trackball is minimal compared with that for the conventional trackball without force feedback.
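One way such contextual force feedback could be computed is sketched below. This is an invented illustration, not the authors' control law; the capture radius, gain, and spring-like force profile are all assumptions. The idea is an attractive force toward an expected target that vanishes at a distance, so that far from any target the device behaves like a conventional trackball.

```python
import math

def contextual_force(cursor, target, capture_radius=80.0, gain=0.05):
    """Illustrative attractive force pulling the cursor toward a target.

    Returns an (fx, fy) force vector: zero outside the capture radius,
    spring-like (proportional to displacement) inside it.
    """
    dx, dy = target[0] - cursor[0], target[1] - cursor[1]
    dist = math.hypot(dx, dy)
    if dist == 0 or dist > capture_radius:
        return (0.0, 0.0)
    return (gain * dx, gain * dy)

# Near the target the motors pull the ball toward it...
print(contextual_force((100, 100), (130, 140)))
# ...but far away the ball behaves like a conventional trackball.
print(contextual_force((0, 0), (500, 500)))  # (0.0, 0.0)
```

In a real device the force would be a function of the full display contents, selecting among many candidate targets; the single-target case above only shows the shape of the mapping from cursor state to motor command.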
Bulletin BIB 975-986