
Proceedings of the joint conference on Easier and more productive use of computer systems

Fullname:Proceedings of the Joint Conference on Easier and More Productive Use of Computer Systems
Editors:Lorraine Borman
Location:Ann Arbor, Michigan, USA
Dates:1981-May-20 to 1981-May-22
Publisher:ACM
Standard No:ISBN 0-89791-056-7; ACM Order Number: 608810 (v. I); ISBN 0-89791-064-8; ACM Order Number: 608811 (v. II); hcibib: CHI81; ACM DL: 800276
Papers:78
Pages:75 (v. I); 149 (v. II)
  1. CHI 1981-05-20 Volume 1
  2. CHI 1981-05-20 Volume 2

CHI 1981-05-20 Volume 1

Comparison of some available packages for use in research data management BIBAFull-Text 1-8
  Edward A. Greenberg; Wm. Max Ivey; Bruce R. Lewis
Data management features of SIR, SAS, and SPSS were applied to a sample hierarchical data base. For each package, the areas investigated included the logical definition of the data base, data entry, data retrieval, data integrity, security, reporting, and updating.
Organizing the annual housing surveys as a very large relationally oriented data base BIBAFull-Text 9-15
  Andrew A. Beveridge; Jennifer A. Norris
Since 1973, the Department of Housing and Urban Development, through the Bureau of the Census, has conducted a yearly nationwide survey of housing. Data on a wide range of topics are collected during face-to-face interviews with over 190,000 individuals. Plainly, the Annual Housing Surveys represent one of the largest longitudinal general social and economic data collection efforts ever undertaken.
   Due to changing policy and substantive interests, as well as government requirements, the interview schedules have changed significantly from year to year. Since the great potential for data from the Annual Housing Survey is in longitudinal analysis, it is necessary to have common variable definitions and consistent formats.
   To accomplish this, we have developed and implemented a system which includes: 1) documentation of the variables, questionnaires, and files across all years and surveys; 2) files created using one homogeneously defined data structure; 3) a simple system to produce custom user files; 4) a method to easily produce routine custom analyses and tabulations using the data.
   We have applied the relational model to create a small data base which documents the interview schedules, files and variable definitions. From this we produce up to date documentation and computer programs which are used to update the Annual Housing Survey data base, to handle custom file requests, and to perform analyses.
The 1940 and 1950 Public Use Sample Project: Data quality issues BIBAFull-Text 16-19
  Richard M. Cohn; Howard R. Prouse
The 1940 and 1950 Public Use Sample Project is the creation of 1/100 household samples from the 1940 and 1950 Censuses of Population. The data source for the samples is the microfilmed original Population Schedules which contain the census enumerator's recording of household information. The procedure to sample the universe of household listings and transcribe the sample households' data is described in the paper. A pretest of the 1940 Public Use Sample included a comparison of three methods of sampling and transcription. The results of this comparison are reported. The applicability of these procedures to similar projects is discussed.
The automation of data processing, analysis, and reporting in a large survey time-series database BIBAFull-Text 20-23
  Christopher J. Gordon; Michael B. Zartman
The May 1981 Survey will mark the 152nd Survey of Consumer Attitudes. Initiated in 1946, the purpose of the surveys is to measure changes in consumer attitudes and expectations, to understand why these changes occur, and to evaluate how they relate to consumer decisions to save, to borrow, or to make discretionary purchases under changing conditions.
   Each survey contains approximately 40 core questions, each of which probes a different aspect of consumer confidence. Open-ended questions are asked concerning evaluations of, and expectations about, personal finances, employment, price changes, and the national business situation. Additional questions probe for the respondents' appraisal of present market conditions for houses and other durables. Demographic data obtained in these surveys include income, age, sex, race, education, and occupation, among others. While many questions designed to measure change in attitudes and behavior are repeated in identical form in each survey, special questionnaire supplements are added to most surveys by outside sponsors on a time-share basis. Supplements to the ongoing surveys give sponsors prompt turnaround on survey materials while taking advantage of shared field expenses. Because standard sampling and interviewing procedures, questionnaire and code development for standard demographic items, and so forth are already established and in motion, sponsors can devote the maximum amount of time and effort to developing the supplemental survey materials themselves.
   Although each survey task is unique in its time requirements, shared time participation on the ongoing Surveys of Consumer Attitudes is an effective and flexible approach for meeting many research needs. Current procedures include production of a fully documented computer data file available for analytic use within 48 hours of the close of interviewing. Within one week of the close of the survey, a report containing tabulations and charts of questions asked is sent to the sponsors.
A new process for documenting and checking archival data BIBAFull-Text 24-31
  Erik W. Austin; Sylvia J. Barge; Susan M. Horvath; Santa M. Traugott
The Inter-university Consortium for Political and Social Research (ICPSR) is a data archive and repository for social science data. A major function of the ICPSR is to disseminate the data holdings in a reasonably standard format. For holdings that will be extensively used, additional effort is made to prepare comprehensive, machine-readable documentation, to cross-check the documentation against the data for accuracy and consistency, and to correct or document any inconsistencies discovered.
   In the past, "cleaning" and documenting the data involved using a number of different computer programs. A great deal of human time was expended on procedural matters: which programs to use, when to use them, and how to coordinate the various stages of the cleaning process. As staff costs and the number of new acquisitions skyrocketed, and computers increased in power and decreased in cost, it became imperative to automate as much as possible the procedure for preparing data for distribution. The GIDO software was developed to meet this need.
   GIDO is an interactive multi-function program package that guides staff members through the procedure for documenting and cleaning social science data. A cohesive history of the processing operations performed on the data is maintained automatically in machine-readable form. Video terminals are used to display "forms" which the staff fill out with the textual and technical documentation for the data. GIDO immediately verifies the contents of each form and provides an opportunity to make corrections. The forms allow the input of information without requiring knowledge of specialized syntax and conventions. After all documentary materials have been entered, GIDO checks the data for consistency with the original documentation, corrects or flags discrepancies encountered, reformats the data using uniform conventions, and produces machine-readable documentation in a form ready for dissemination.
   Use of GIDO enables the ICPSR archive to perform its data processing functions more efficiently and at lower cost, thus permitting the organization to meet ever-increasing demands on its resources.
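The forms-checking step lends itself to a small illustration. The following Python sketch imitates GIDO's two phases -- verifying a documentation "form" as it is entered, then flagging data values inconsistent with that documentation. All field names, rules, and the record layout are invented for illustration; GIDO itself was not written in Python, and its actual form contents are not given in the abstract.

    # Hypothetical sketch of GIDO-style documentation/data cross-checking;
    # field names and rules are invented, not taken from GIDO.

    def check_form(form: dict) -> list[str]:
        """Verify one documentation 'form' the way GIDO verifies each entry."""
        errors = []
        if not form.get("variable_name"):
            errors.append("variable_name is required")
        lo, hi = form.get("column_range", (0, -1))
        if lo > hi:
            errors.append("column_range start exceeds end")
        if not form.get("valid_codes"):
            errors.append("valid_codes must list at least one code")
        return errors

    def clean_records(records, form):
        """Flag data values inconsistent with the documentation."""
        lo, hi = form["column_range"]
        valid = set(form["valid_codes"])
        flagged = []
        for i, record in enumerate(records):
            value = record[lo:hi + 1].strip()
            if value not in valid:
                flagged.append((i, value))  # flag rather than silently correct
        return flagged

    form = {"variable_name": "V12", "column_range": (10, 11), "valid_codes": ["01", "02", "99"]}
    assert check_form(form) == []
    print(clean_records(["xxxxxxxxxx01", "xxxxxxxxxx03"], form))  # [(1, '03')]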
An automated system for responding to data service requests BIBAFull-Text 32-35
  Tina G. Bixby; Janet K. Vavra
During the 1970s, there was a steady decline in the cost and size of computing hardware with a corresponding phenomenal growth in computing capability. Computers now help store, manage, duplicate and interpret vast quantities of data with an ease and relative economy undreamed of in the past. These developments have, over the years, fostered the growth of new research methods in a variety of fields, including the social sciences. Large and more complex bodies of quantitative data have been collected as social scientists seek ways to understand human behavior with empirical research methods and scientific sampling techniques. In addition to collecting their own data, researchers have also utilized vast amounts of machine-readable data that have been prepared by other researchers, governmental agencies, and private organizations.
   The changes in the computing industry combined with the increased demand for services have made it feasible for organizations to consider automating as many tasks as possible. FAST (Facility to Aid Servicing Transactions) is one system that was created in response to these conditions. This paper describes FAST and its impact on the organization which developed it.
Online searches of social science data sets: The RIQS system and ICPSR data BIBAFull-Text 36-44
  Ann Janda; Kenneth Janda
Every solution seems to generate a new problem. The problem of accurately assessing public opinion led to the invention of the sample survey. The subsequent problem of analyzing survey responses brought widespread use of machine-readable data. The problem of preserving machine-readable data for secondary analysis stimulated the creation of data depositories or "archives." Growth over time in the holdings of these social science data archives, however, has aroused needs for improved retrieval of data. This paper explains one method of dealing with such needs. It involves an interactive search of the holdings of the most diversified social science data archive, the Inter-University Consortium for Political and Social Research, using a general-purpose information retrieval system, RIQS, written for CDC computers.
Developing an aggregated survey/macro-economic database for statistical and graphical social science applications BIBAFull-Text 45-47
  Michael B. Zartman; Christopher J. Gordon
The Survey Research Center at The University of Michigan has routinely conducted surveys of consumer attitudes since 1946. The May 1981 survey is the 152nd in this series which provides regular assessments of consumer attitudes and expectations. The surveys are designed to explore why changes in consumer attitudes and expectations occur, and how these changes influence consumer spending and saving decisions. A major research objective of the project is to use this collected data to evaluate economic trends and prospects.
   Each survey contains "standard" questions asked at regular intervals, many of which have been included from the project's inception. The aggregated results of these surveys provide a wealth of time-series data with the potential to be an important factor in forecasting consumer behavior. The "standard" questions themselves can be disseminated into approximately 190 separate data series (including index transformations). When "nonstandard" (or non-core) questions are included, this total jumps considerably. With such a large number of data variables, many different areas of analysis are available to be researched. When the many macro-economic data series (e.g., Federal Reserve, Census, or Retail Sales data) are added to this compilation, the data management problems increase. The research results which could be achieved, then, are directly related to the development of a flexible method of data storage and retrieval.
On-line manipulation of small area demographic data: AmericanProfile(sm) BIBAFull-Text 48-57
  Garry S. Meyer
A new approach to the way in which users interact with the computer in an on-line environment is presented. The method is designed specifically to provide both a friendly and a highly productive means of communicating user requirements. Unlike systems which are targeted toward either novice computer users or experienced programmers, the approach we take is well suited to all levels along this continuum. AmericanProfile(sm), a system which provides access to demographic and economic data for both standard geo-political units of analysis (states, counties, SMSAs, Zip codes, etc.) and unique small areas (polygons, circles, etc.), is discussed as a case in point to illustrate our approach.
Bibliometric analysis of American history data by FAMULUS BIBAFull-Text 58
  Miranda Lee Pao
Note: (abstract only)
Bibliometric analysis of ISI's Arts & Humanities Citation Index BIBAFull-Text 58
  Morton V. Malin; Martha C. Dean
The most frequently cited journal articles in the Arts & Humanities Citation Index (A&HCI) are analyzed in terms of their disciplinary classification. Four years (1976-79) of the A&HCI data base yielded 144 journal articles which were cited 10 or more times during the period. These articles are predominantly from the disciplines of Language and Linguistics (31%), Philosophy (23%), History (13%), Religion (8%), and Archeology (6%).
   However, most citations in the A&HCI are to books rather than journal articles (96% versus 4% among those items cited 10 or more times). The discipline of Literature (or Literary Criticism) is predominant in the list of highly cited books.
   These statistics suggest systematic variation in the resources used by the different disciplines. Various other aspects of the A&HCI data base are explored, and some specific examples are discussed in depth.
A text-retrieval system used in humanistic archive applications BIBAFull-Text 58-59
  Knut Hofland; Sigbjorn Arhus
NOVA*STATUS is a text-retrieval system which is available at all Norwegian universities and several government institutions, running on computers from different manufacturers.
   NOVA*STATUS is a full-text retrieval system originally developed by AERE, Harwell, England, and redeveloped at the Norwegian Computing Centre for the Humanities and other Norwegian institutions. The data is divided into documents and each word (or a truncated part of it) in each document is potentially a key word. The format of the document is free. By use of prefixes it is possible to divide each document into specific fields of information.
   A request to the system can consist of a Boolean expression of words and prefixes and relational expressions between prefixed words. The system allows for macros which can be permanently stored, making, for example, synonym lists possible. The system also contains procedures for off-line sorting and printing of catalogues and for the coding of data for statistical analysis by SPSS.
   At the Norwegian Computing Centre for the Humanities, NOVA*STATUS has been, and is still being used in a variety of humanistic archive applications. Several of these have a common data format which consists of 20-30 defined fields of fixed information and one or more fields of free-text description of e.g., photographs, paintings, archaeological and cultural artifacts, old buildings and documents. The second part of the paper will describe the actual use of the system in these various applications.
Note: (abstract only)
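As an illustration of the kind of request described above, here is a minimal Python sketch of prefix-qualified, truncation-aware Boolean matching. The field prefixes (TI, DE), the sample document, and the query form are invented, and no claim is made that they reproduce NOVA*STATUS's actual syntax.

    # Illustrative sketch of a NOVA*STATUS-style search: every word is a
    # potential key word, prefixes name fields, and truncation matches word
    # stems. The data model and query form are invented to mirror the
    # abstract, not the system itself.

    def matches(doc: dict, field: str, term: str) -> bool:
        words = doc.get(field, "").lower().split()
        if term.endswith("*"):                      # truncated search term
            return any(w.startswith(term[:-1]) for w in words)
        return term in words

    doc = {
        "TI": "old farm buildings in western Norway",
        "DE": "photograph of a timber barn, restored 1921",
    }

    # Boolean request: TI contains 'building*' AND DE contains 'photograph'
    hit = matches(doc, "TI", "building*") and matches(doc, "DE", "photograph")
    print(hit)  # True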
The text's the thing: Concordances to literary texts BIBAFull-Text 59
  Michael Preston
The history of computer-generated concordances is already one-third of a century long. Thousands of concordances have been generated; many have been published. Most of these are useful, but there are limitations to all of them. In this presentation I discuss a number of variations on concordance-making based on specific projects being carried out at the University of Colorado.
   A word-form concordance can be of considerable utility. Particularly for older states of language of which our knowledge is often less than perfect, this "primary" concordance form seems best for initial circulation, but such a concordance is insensitive to variants and ambiguities. It is often as suggestive of what might have been done as it is directly useful.
   With the increasing availability of microcomputers and various kinds of remote terminals, it is now possible to remove many of the difficulties of text-editing so that a "secondary" concordance edited toward particular applications can be produced more readily. At the University of Colorado, at which the majority of humanists who use computers wish to make maximum use of the available technology without becoming computing scientists, I have found it practical to suggest a particular synthesis of batch and interactive computing. This involves the use of a retrieval, concordance-generating, and editing system so modular in design that editorial intervention is practical at many points. This editing makes use of device-dependent text editors of sufficient sophistication that the user perceives little of the technical operation beyond requesting his programs and his text; otherwise he has the freedom of a typewriter coupled to the benefits of a screen for displaying modifications to his text as they are made, whether directly by him or by a variety of programmed functions. Stations built around "smart" terminals as well as "dumb" terminals with microcomputer and floppy disks are operational.
   Thus it is now more practical to produce second-generation concordances which more nearly reflect the perceived needs of a scholarly community: words may be (manually) disambiguated by meaning and function, contexts may be edited either to omit extraneous material or insert explanatory matter, and words may be clustered by dictionary or thesaurus. The result is concordances of far greater utility in specific areas and more meaningful statistics.
   The development of better equipment and new techniques has made it possible to interact more thoroughly with one's text. There is no need for premature data reduction, but rather the encouragement of what I call the "infinite loop of literary scholarship": one works with one's texts to produce results which suggest work to produce more results which suggest still more work .... The newer technology seems to fit the humanist far better than did the old.
Note: (abstract only)
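For readers unfamiliar with the "primary" word-form concordance, a minimal Python sketch (with an arbitrary text and context width, not drawn from the paper) shows the basic operation: every occurrence of every word form is listed in context, with no disambiguation of variants.

    # Illustrative word-form concordance generator; text and context width
    # are arbitrary. This sketches only the unedited "primary" form the
    # abstract describes, insensitive to variants and ambiguities.
    from collections import defaultdict

    def concordance(text: str, width: int = 3):
        words = text.lower().split()
        entries = defaultdict(list)
        for i, w in enumerate(words):
            left = " ".join(words[max(0, i - width):i])
            right = " ".join(words[i + 1:i + 1 + width])
            entries[w].append(f"{left} [{w}] {right}")
        return entries

    text = "the quality of mercy is not strained it droppeth as the gentle rain"
    for line in concordance(text)["the"]:
        print(line)
    #  [the] quality of mercy
    # it droppeth as [the] gentle rain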
Sift - searching in free text: A text retrieval system BIBAFull-Text 59
  Oystein Reigem
The SIFT project is aimed at developing an advanced text retrieval system possessing the features of high modularity, high portability, possibilities for integration with word processing systems and a flexible user interface. Possible applications for such a system would be found wherever any sizable collection of information requires efficient retrieval. The SIFT system is mainly designed to solve the problems of searching in free, i.e. unstructured, text but extensive functions for dealing with structured information are also offered.
   The SIFT project is based on former experience in the use of other retrieval systems, particularly the Norwegian version of the British STATUS system, NOVA*STATUS, a system which has found application in various public agencies and at all Norwegian universities.
   The SIFT project was initiated on January 1, 1980, and a prototype version of the system will be implemented on a NORD computer towards the end of 1981. The final product will be made available free of charge.
   This presentation will treat the structure, characteristics and applications of the SIFT system.
Note: (abstract only)
A thesaurus for Canadian iconography BIBAFull-Text 59-60
  Denis Castonguay
The Picture Division of The Public Archives of Canada has undertaken the construction of a thesaurus of iconographic terms as part of its preparations for a computerized inventory system. The thesaurus which complements an existing set of descriptive standards will serve as a terminological control device enabling indexers and researchers to translate natural language into a more restrained and logical system language. General characteristics of the thesaurus will be described. Special emphasis will be given to the impact of on-line information retrieval computer technology on the design and development of the system language. Sample pages of the thesaurus will be available for examination and further discussion. An overview of other Canadian experiments in the field of subject access to visual records will also be provided.
Note: (abstract only)
Folk BIBAFull-Text 60
  Jan Oldervoll
FOLK is an online analysis and retrieval system developed for the 1801 census of Norway, which is machine-readable in a full-text and a coded version, each with approximately 1,000,000 records on individuals. The main advantage of the system is speed. FOLK consists of several parts:
   • A fast program for statistical analysis of the simple kind. Cross tabulations can be done in 1/16 of the time used by SPSS.
   • An interface to statistical and graphical packages for more complicated analysis.
   • A retrieval system for finding subsets of the data base. The subset can be anything from a single person to a region. Information from the coded and the full-text version can be used for extracting individuals.
   • A recoding system for recoding the coded version using a simple semantic analysis of the full-text version.
   • A fully computerized record linkage system. Information from other sources can be automatically added to the records on the individuals in the census.
Note: (abstract only)
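A cross tabulation of the kind FOLK computes quickly can be sketched in a few lines of Python; the variables and values below are invented, and nothing here reflects how FOLK actually stores or scans the census.

    # Minimal cross-tabulation in the spirit of FOLK's fast tabulator;
    # the records and variables are invented for illustration.
    from collections import Counter

    records = [
        {"district": "Bergen", "occupation": "farmer"},
        {"district": "Bergen", "occupation": "fisher"},
        {"district": "Trondheim", "occupation": "farmer"},
        {"district": "Bergen", "occupation": "farmer"},
    ]

    table = Counter((r["district"], r["occupation"]) for r in records)
    for (district, occupation), n in sorted(table.items()):
        print(f"{district:10} {occupation:8} {n}")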
Thesaurus on American works of art BIBAFull-Text 60
  Eleanor E. Fik
Designing and providing subject access to works of art has traditionally been very subjective, because no standardized word list can be expected to meet the scope of all art collections. Whereas tailoring subject terms to the scope of a given collection is the most practical approach for the curator in charge of the collection, the researcher, who may not be an art specialist, is often frustrated when the listing of subject terms does not include the terms relevant for his/her purposes. This presentation will explore how the development of a thesaurus resolves the conflict of subject vocabulary. Specific examples will be drawn from four computer projects at the National Museum of American Art. Each of the projects varies in scope, yet a single subject classification guide has been developed for purposes of providing subject access to the contents of each project.
   Because a separate subject word list was not originally designed for each project, a thesaurus is now being developed which will allow for a listing of terms not used for indexing but which are relevant to both the scope of each project and anticipated researcher needs.
   From a practical viewpoint, the presentation will demonstrate how the computer can be used to generate terminology to be included in the thesaurus.
Note: (abstract only)
Art and architecture thesaurus BIBAFull-Text 60
  Pat Mohlot
Art and architecture literature presents indexing difficulties due to the absence of a recognized controlled vocabulary. A recent investigation showed a number of independent partial efforts targeted to local needs. The Art and Architecture Thesaurus (AAT) group is building on the experience of others to create a unified, hierarchical thesaurus for these fields. Although the thesaurus itself will be in machine-readable form, the real value of automation will be the ability to search hierarchically the literature indexed with the AAT.
Note: (abstract only)
The role of the computer in ethnographic analysis BIBAFull-Text 60
  Paul Beynon Davies
Ethnography is a methodology which emphasises a "soft" interpretative approach to social reality. It is often portrayed as being at the opposite pole to quantitative approaches as exemplified in the classic Merton-Lazarsfeld paradigm (Structural-Functionalism wedded to the survey method).
   Ethnography is a method in which the researcher actively engages in and records the life of a social group. This record of experience is essentially qualitative. It is primarily constructed in the form of textual description: an ongoing account of a person's observations, thoughts and feelings while in the "field". This text is usually given the generic title of field-notes.
   The Ethnographic researcher is therefore normally confronted with a vast amount of textual data. To gain some understanding of, and control over, this data, the Ethnographer must in some way split up this record of raw experience. He must in some way "chunk" his data into easily manageable units or categories. It is this classificatory activity which forms the basis of Ethnographic analysis.
   In greater detail, Ethnographic data analysis may be portrayed as consisting of three analytically distinct, but empirically indistinct, activities:
   1. The reading of field-notes, accompanied by the recording of themes and hypotheses;
   2. The coding of important topics observed within the field-notes under different category headings;
   3. The disassembling of field-notes by coded category, the purpose being the creative filing and retrieving of one's data.
   The prime concern of this presentation will be to discuss means by which such analysis may be accomplished.
   It is the author's belief that the schema shown below represents a possible evolutionary trend in Ethnographic data analysis: items lower down the schema give the Ethnographer greater power and flexibility in the way he handles text. Reference will be made to presently ongoing research at Cardiff as evidence of this claim.
   1. The Traditional Filing Cabinet. a. Simple chronological filing of text. b. Multiple filing: the actual disassembling of text into files.
   2. The Filing Cabinet and Separate Indices. a. Chronological filing: card indices. b. Chronological filing: specialised indices. c. Chronological filing: computer indices.
   3. The Full Computer Approach. a. Indices and field-notes stored on UNIX. b. SPICE, a System of Personalised Interactive Computing for Ethnographers: a term invented purely to emphasise the "spice" of Ethnographic research.
   Finally, this presentation will also discuss the implications that this research has for textual management in general. The projected computer arrangement will, I believe, prove of advantage not only to the Ethnographer, but to any researcher who employs continuous text as his/her primary resource.
Note: (abstract only)
New file management in P-STAT BIBAFull-Text 61
  Roald Buhler
P-STAT began as a collection of integrated commands which read rectangular files sequentially. A simple modification language for recoding and case selection was added in the late 1960's. Commands were added throughout the 1970's. Recently, however, a major effort has gone into language enhancement and file structure improvements.
   P-RADE, a random access data enhancement to P-STAT, is an example. This type of file structure supports up to 10 indexing keys, allowing a case or a group of cases to be accessed very rapidly. In addition, any P-STAT command can read a P-RADE file sequentially in any key order.
   In some ways, this approach blends aspects of database technology into statistical software. Examples of its use will be given.
Note: (abstract only)
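A hedged sketch of what a multiply keyed file buys the user -- fast lookup of a case or group of cases by any key, plus sequential reading in any key order -- follows. The in-memory structure and field names are invented and are not P-RADE's implementation.

    # Toy model of a P-RADE-like keyed file: several alternate keys over
    # the same cases, fast single-case lookup, and sequential reading in
    # any key order. Field names are illustrative only.

    class KeyedFile:
        def __init__(self, cases, key_fields):
            self.cases = cases
            # one index (key value -> case positions) per declared key
            self.indexes = {k: {} for k in key_fields}
            for pos, case in enumerate(cases):
                for k in key_fields:
                    self.indexes[k].setdefault(case[k], []).append(pos)

        def lookup(self, key, value):
            """Random access: fetch the group of cases with this key value."""
            return [self.cases[p] for p in self.indexes[key].get(value, [])]

        def sequential(self, key):
            """Read the whole file sequentially in the order of any key."""
            for value in sorted(self.indexes[key]):
                yield from self.lookup(key, value)

    f = KeyedFile(
        [{"id": 3, "state": "MI"}, {"id": 1, "state": "NJ"}, {"id": 2, "state": "MI"}],
        key_fields=("id", "state"),
    )
    print(f.lookup("state", "MI"))                 # both Michigan cases
    print([c["id"] for c in f.sequential("id")])   # [1, 2, 3]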
Initial experiences with an on-line catalog BIBAFull-Text 61
  Rebecca D. Dixon; Edmund D. Meyers, Jr.
In January of 1981, the Center for the Study of Youth Development initiated an on-line catalog of the holdings of its specialized library, consisting of 10,000 monographs, journals, vertical file materials, etc. The present paper discusses the reactions of the end-user or patron population to the resource. The background of the library automation project -- including issues of cost-effectiveness, increased power, and user utility -- is discussed in order to establish the initial goals of this activity. Then, attention is given to how the project was implemented; this includes a comparison of preliminary goals with what ultimately was delivered. The transition from a COM catalog to the on-line catalog required training of patrons (some of whom had little or no experience with a computer terminal), and only half of the Center staff participated in the initial training sessions. Preliminary patron behavior is reviewed, and an informal analysis of both positive and negative experiences is offered. The initial experiences are summarized in a discussion of the problems and prospects of the user interface of the "query" portion of the on-line catalog software.
Note: (abstract only)
Urban research in ethnic, demographic and household-economic structures with small area, micro-databases BIBAFull-Text 62
  Harold Benenson; Steven Just
Computerized U.S. Census data has been most widely used for (1) employment, fertility, demographic and stratification research involving Public Use Sample (PUS) microdata on the national level, and (2) applied research (for planning, administration, marketing, and other applications) with summary (aggregated) data for localized (i.e., block, tract, community, etc.) geographic units. A third, highly productive avenue of research, involving Census PUS microdata for localized urban units (i.e., SMSAs, counties and especially selected large-city neighborhoods), has not received the attention it merits, either among sophisticated public data users or among novice users.
   Three forms of current or future small area Census microdata constitute resources for urban research. First, conventional 1970 Census PUS data sets are available for counties and/or SMSAs (with minimum populations of 250,000). Second, special tabulations for the two largest U.S. cities permit analysis of 1970 household and person records grouped by (sub-county) urban neighborhoods (27 in New York City; 12 in Chicago). Third, 1980 Census microdata, by allowing identification of geographic areas of smaller population size (100,000 population), will vastly expand the applications of localized research with the conventional PUS or special tabulations. In addition, the 1980 PUS microdata will, for the first time, allow comparative time-series analyses of county (or SMSA) area populations over the 1970-1980 decade.
   In contrast to national PUS microdata research, local level analyses have the advantages of (1) smaller data set size and processing costs, (2) more immediate integration of computerized research hypotheses with additional sources of (qualitative) information and questions (stemming from direct knowledge of the communities studied), and (3) increased ability to zero in on specialized ethnic, occupational-industrial, migrant, age, and other urban population groups which are disproportionately represented in particular local environments. Our own research projects (at various stages of development) which attempt to exploit these advantages include computerized analysis of:
   1. Patterns of household composition, and source and structure of family income, among Upper East Side and Upper West Side Manhattan residents with family incomes of $50,000 or more (as reported in the 1970 Census);
   2. Employment patterns of married women of Cuban immigrant background, in relation to family class position and period of immigration, for Hudson County, New Jersey;
   3. Contrasts in the occupational positions and household patterns of first-generation and second-generation husbands and wives of Italian background in a New York City working class community (Astoria-Long Island City, Queens);
   4. Wives' employment patterns in relation to ethnic background and husbands' occupations and income levels in a working class community located in a manufacturing center (South Side, Chicago);
   5. Change in the social and demographic characteristics of succeeding groups of migrants to an expanding "sunbelt" metropolitan area (Albuquerque, New Mexico);
   6. Contrasts in local housing markets and housing availability, involving analysis of the number and characteristics of vacant housing units for New Jersey counties.
   These, as well as other projects we have assisted, have been undertaken with varied software resources, including packages (such as CENTS-AID) with unique hierarchical file processing capabilities, as well as more versatile (non-hierarchical), general purpose packages (such as SPSS). The advantages and research applications of small area micro-databases can be realized with a range of software techniques and user-formulated research strategies.
Note: (abstract only)
Beyond cataloging functions for art museum data banks BIBAFull-Text 62
  Andrew B. Whinston
A data management system for art museums is presented. In addition to providing conventional cataloging functions such as searching, sorting and indexing, the system is shown to be able to model complex relationships between entities relevant to the application. The importance of this capability with regard to representing higher levels of information (beyond pure physical characteristics) is pointed out. Alternative representations of such relationships are discussed, and some directions for further work in the automation of a museum's catalog are cited.
Note: (abstract only)
Making computer capabilities accessible to musicians BIBAFull-Text 63
  Ann K. Blombach
During the past ten years at The Ohio State University we have developed a large library of music-related computer programs, encompassing many aspects of music scholarship. These programs include procedures for performing basic music analysis functions as well as programs for information retrieval. We have also prepared a considerable library of encoded musical data and bibliographic information. All our programs and data are stored on disks on OSU's Amdahl 470, making them immediately accessible to any computer programmer who has been initiated into the mysteries of manipulating disk-stored data sets. We wanted, however, to make everything equally accessible to those musicians who are not particularly interested in mastering the complexities of computers, but who would nevertheless like to make use of computer-produced results. We have taken two significant steps toward solving this difficult problem:
   1. SLAM (Simple Language for Analyzing Music), a "super-high-level" language written in SPITBOL by Thomas G. Whitney (formerly on the staff of OSU's Instruction and Research Computer Center). With SLAM, the musician specifies which music analysis procedures and which musical data he would like to use, communicating with the computer in normal English using traditional music analysis terminology. Though he must satisfy certain syntactical requirements and must include certain key words, such constraints are minimal. For example, "Please count the intervals in the alto voice of Bach's chorale 308." and "Count intervals alto 308." are both legal SLAM commands which would produce the same results. SLAM translates the user's request into the appropriate job control language statements which call the programs and data necessary to perform the task.
   2. IRRS (Information Retrieval Request System), also written in SPITBOL and currently under development. This system provides access to different types of bibliographic, textual, and descriptive data stored on the computer. The user makes his request in the appropriate format, and the computer executes the steps necessary to produce the requested information. For example, in order to retrieve a bibliography of books and articles written between 1960 and 1970, dealing with the perception of music intervals, the user enters "keyterm: perception, music intervals" and "year: 1960-1970" from a computer terminal. The terminal then prints a list of books and articles meeting these criteria.
   SLAM has been very successful. Not only have the non-computer-programmers found SLAM invaluable, but even our musician-programmers have found it much easier to access existing programs through SLAM. The information retrieval programs are well under way, and we expect them to be equally useful in providing access to research materials. In short, we are achieving our goals: 1) to make available a variety of computer procedures and data to computer-shy musicians and 2) to eliminate the fears, disappointments, and general confusion too often associated with musicians' attempts to use computers.
Note: (abstract only)
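The two example commands suggest how little syntax SLAM demands. The toy Python sketch below imitates only the key-word idea -- picking the procedure, unit, voice, and piece number out of an English request. SLAM itself was written in SPITBOL and emitted job control language, neither of which is reproduced here, and the vocabulary lists are invented.

    # Toy illustration of SLAM-style command handling: key words are picked
    # out of an English request and mapped onto an analysis routine.

    PROCEDURES = {"count"}
    UNITS = {"intervals", "chords"}
    VOICES = {"soprano", "alto", "tenor", "bass"}

    def parse_request(text: str):
        words = text.lower().replace(".", "").split()
        proc = next(w for w in words if w in PROCEDURES)
        unit = next(w for w in words if w in UNITS)
        voice = next(w for w in words if w in VOICES)
        piece = next(w for w in words if w.isdigit())
        return proc, unit, voice, piece

    # Both the polite and the terse form reduce to the same call:
    print(parse_request("Please count the intervals in the alto voice of Bach's chorale 308."))
    print(parse_request("Count intervals alto 308."))
    # ('count', 'intervals', 'alto', '308') in both cases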
Requirements for improving the use of computers to support the development of policy decisions BIBAFull-Text 63
  John Henize
Computer-based information systems have been developed and used successfully for production, engineering, and lower-level management tasks, but they have yet to be widely applied to aiding management decision making at the higher policy-making levels. Despite many attempts, the failures have been many and the successes few. This has resulted in large part from the fact that the technicians who have been engaged to design such systems have not correctly understood the nature of the problem environment with which they are dealing. Because they themselves have had no experience at the policy-making levels, they have had a poor conception of the problems to be solved, and thus have made mistakes which they would not have made had they been designing an information system for lower-level tasks. In designing an inventory or process control system, for instance, the technicians have carefully studied the nature of the problems to be dealt with, and have decided which information is important and which is not. They have not, in these cases, delivered reams of superfluous information to every point in the system. But when designing an information system to aid higher-level decision making, they have tended to do the exact opposite. They have attempted to put every conceivable piece of information that could possibly be of the most remote interest at the fingertips of each and every policy maker -- each of whom is already suffering from a severe information overload. The decision maker could never possibly begin to digest all of this information, even if he found it useful, which, in general, he does not. This paper proposes methods for dealing with this crucial inhibiting problem.
Note: (abstract only)
On the need for humane rationing BIBAFull-Text 63-64
  Francis M. Sim; Glen D. Kreider; Roscoe T. Miller
The occasion for this discussion is our recent experience with a severe shortfall in computational capacity at The Pennsylvania State University. Although the details of this affliction may not be reproduced elsewhere, it is our opinion that the events we experienced stem from essential, underlying phenomena which do have wide currency. These are, first, that overall demand for computational facilities and services is increasing "exponentially" and shows no sign of slowdown, and, second, that resources (most especially including funds) to provide increases in the relevant supply of computing capacities are not keeping pace and cannot be expected to do so.
   It is possible that technical advances can treat this disorder, but given the nature of the political/bureaucratic systems which are the vehicles for the delivery of such "fixes", acquiring them will not be painless. Concretely, it seems unlikely that faculty and students in colleges and universities can expect relief from recurrent boom-and-bust in computational resources, whether the duration of such cycles is measured in decades or days. It behooves us to ask whether the attendant pains must be endured, and whether they are conducive to easier and more productive use of computing systems. Our answers are, first, that such pain does not ennoble, and, second, that it often is counterproductive. Consequently, we must try to identify the proximate sources of the disrupting effects of these cyclic shortfalls and attempt to curb them, within our means.
   We propose that the appropriate guidelines for allocating scarce computing resources may be characterized as prescriptions for humane rationing. In the most general terms, these prescriptions are 1) that qualified users should be ensured a fair share of the available resources without unnecessary expense of effort in competition for them and in queuing, and 2) that use of computing resources should be so governed as to ensure that all user sessions are as free as possible of delays, encumbrances, and constraints induced by management practices rather than by inherent limits of hardware and software.
   While rationing is unnecessary during the occasional boom in academic computing resources, we should have on the shelf the management tools which can make fair and effective allocation possible during the recurrent busts we may anticipate in the 1980's.
Note: (abstract only)
Relational data base management systems: A tale of two systems BIBAFull-Text 64
  Malcolm S. Cohen
A comparison is made of systems the author has designed for a mainframe and a microcomputer. The limitations and advantages of microcomputers for data base management are discussed, and example applications are presented. Advantages of the set theoretic approach are discussed, and applications most suitable for the relational model are described and contrasted on both the large and small systems.
   The systems discussed include a commercial system, the Condor Series 20 DBMS, which runs under CP/M on Z-80 microcomputers, and MICRO, a system which runs under MTS on large virtual-memory mainframe computers. CP/M is Digital Research's operating system; MTS is the operating system at the University of Michigan.
Note: (abstract only)
A micro-based project management system BIBAFull-Text 64
  William R. Dahms
This project management system is designed to give research administrators and department executives the capability to handle the financial accounting needs of 30 or more separately budgeted projects/departments on a computer with 64K of RAM and two floppy disks. The system generates a wide variety of flexible reports, including: a Cost Accounting Summary showing previous period expenses, current period expenses, total expenses, budgeted amount, encumbrances, and remaining balance by line item; a Budgetary Summary which shows, for the current period and year-to-date, the actual expenses, the budgeted amount, the variance between actual and budget, and the percent of budget expended by line item; an Income Statement showing revenue and expenses; an Expense Report; a Transaction Register; and more.
Note: (abstract only)
Combining database management and statistical subroutines into a user-oriented data analysis facility BIBAFull-Text 64-65
  Marek Rusinkiewizc
Most database management systems (DBMS) offer convenient and flexible data structures and very good data maintenance facilities, but their data manipulation languages are usually limited, and most data analysis applications require extensive programming in a host language. On the other hand, packages of statistical subroutines (PSS) usually have very good data manipulation and analysis facilities, while lacking the well known advantages of a DBMS. An attempt was made to combine the data definition and maintenance facilities of a DBMS and the data manipulation and analysis facilities of a PSS into a single user-oriented system. The additional software developed for this purpose performs the following functions:
   1. It allows the user to define his own analysis and (optionally) store it in a library for further reference.
   2. It allows the user to define the data on which the analysis is to be performed.
   3. It allows the user to execute the (predefined) analysis in the following way: a) the user's description of the analysis is translated into a sequence of data analysis and/or data manipulation subroutines of the PSS; b) the required data are retrieved from a database under the control of the DBMS and put into a temporary file, whose structure is determined by the analysis input requirements; c) the analysis is performed under the control of the executing PSS program.
   The outlined system is now being implemented, as an interdepartmental effort, at The Institute for Organization of the Medicine Industry, Warsaw, Poland. It utilizes IMS/VS as the database management system and the OSIRIS III package for statistical data processing. Although the system is still under development, it is used not only by research workers but also by administrators and management for relatively simple analyses which are not routinely performed as standard reports. The main advantages of the outlined approach can be summarized as follows: a) reduction of the time and cost of preparing an analysis; b) increased reliability; c) provision of a tool for the actual decision makers to perform their own data analyses without the intervention of programmers.
Note: (abstract only)
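The three-step execution path (translate the stored analysis, retrieve into a temporary file, run the PSS steps in sequence) can be sketched as follows. The analysis library format, step names, and data are invented, and the real system's IMS/VS and OSIRIS III interfaces are not modeled.

    # Hedged sketch of the outlined DBMS/PSS coupling: a stored analysis
    # definition names the variables to retrieve and the statistical steps
    # to run. All names and data here are illustrative only.

    ANALYSIS_LIBRARY = {
        "monthly_summary": {
            "variables": ["dept", "cost"],
            "steps": ["sort", "means"],
        }
    }

    def retrieve(variables):
        """Stand-in for the DBMS: pull the named variables into a temp file."""
        database = [
            {"dept": "A", "cost": 10.0},
            {"dept": "B", "cost": 30.0},
            {"dept": "A", "cost": 20.0},
        ]
        return [{v: row[v] for v in variables} for row in database]

    STEPS = {
        "sort": lambda rows: sorted(rows, key=lambda r: r["dept"]),
        "means": lambda rows: {"mean_cost": sum(r["cost"] for r in rows) / len(rows)},
    }

    def run_analysis(name):
        spec = ANALYSIS_LIBRARY[name]           # 1. user-defined analysis
        data = retrieve(spec["variables"])      # 2. data pulled from the DBMS
        for step in spec["steps"]:              # 3. PSS subroutines in sequence
            data = STEPS[step](data)
        return data

    print(run_analysis("monthly_summary"))  # {'mean_cost': 20.0}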
Introductory sociology with the general social survey BIBAFull-Text 65
  David L. Ellison
The purpose of this presentation is to describe an alternative sociology course that links student computer skills with available social survey data. Students are given access to a file of SPSS programs which they can easily modify to fit their own purposes. Using the General Social Survey they can test hypotheses on current data reflecting their interests. The broad range of data allows beginning students with little or no previous computer experience to investigate a wide variety of topics.
Note: (abstract only)
Operating systems, editors and application packages: Conceptual and terminological problems facing new users of BMDP and SPSS BIBAFull-Text 65
  William Bezdek
Note: (abstract only)
Public use of an economic data base system BIBAFull-Text 66
  Charles G. Renfro
The subject of this paper is the design of an economic data base system for public use, taking as a case study the Kentucky Economic Information System (KEIS). This system offers its users facilities ranging from simple data retrieval and display to the capability to construct, maintain, and use econometric models online. It was designed originally to be user friendly to the trained econometrician, offering a semi-natural, verbal-mathematical free-format command language as the basic communication mechanism. However, with the development of the KEIS into a data base system that is widely used by government officials, academics, and others throughout Kentucky, the need has developed to provide a facility that is user friendly to any possible user.
   This paper considers the issue of user friendliness as a variable, depending upon the category of user. But it also considers the role played by the computer network and its operating conventions as a determinant of the user friendly features that are required. For example, in order to make the KEIS usable by reference librarians in universities, it was necessary to design a special interface; this necessity relates to the operating policies of the Kentucky Educational Computing Network, one of the computer networks on which the KEIS resides. In addition, this paper considers the issue of user friendliness as it arises from the specific characteristics of the data and the operations performed: an economic data base system is inherently more difficult to operate than, say, a bibliographic data base system.
   Various other aspects of the KEIS have been considered in articles and papers appearing in such journals as the Journal of the American Society for Information Science and the Review of Public Data Use, and in the proceedings of such conferences as the 1981 National Online Conference (March 1981) and the 8th European Urban Data Management Symposium, Oslo, Norway (June 1981). A paper on the econometric modeling language (MODLER) that is available as part of the KEIS will be given at the 1981 Economic Control and Dynamics Conference, Copenhagen, Denmark (June 1981). This paper complements these other articles and papers.
Note: (abstract only)

CHI 1981-05-20 Volume 2

Recent advances in user assistance BIBAFull-Text 1-5
  N. Relles; N. K. Sondheimer; G. P. Ingargiola
As interactive users find conventional methods of training and documentation inadequate, designers are providing systems with online reference information, descriptions of valid input, elaboration of error messages, and explanations of a system's behavior. This paper describes some existing commercial systems that offer online assistance and more experimental approaches by the research community. The following material was originally presented at the SIGSOC conference on Easier and More Productive Use of Computing Systems. An extended version will appear in a special issue of the IEEE Transactions on Systems, Man, and Cybernetics (Volume SMC-12, March/April, 1982), and is reprinted here with the permission of the IEEE.
   Online user assistance is now offered on commercial systems and is the subject of investigation in experimental settings. It is difficult to compare the advantages and limitations of different approaches because they vary along many dimensions and because there is no commonly accepted terminology. A grouping of these dimensions into major categories is a necessary first step towards more empirical evaluations. The major software-related features of online assistance appear to fall into four categories:
   • access method -- the way users can construct or enter requests for assistance;
   • data structure -- the manner in which different portions of assistance information are related to each other;
   • software architecture -- how assistance requests and their responses are communicated among a user, an operating system, application programs, and the assistance database; and
   • contextual knowledge -- how much information is retained about the assistance environment, including the user, the application, and the tasks being performed.
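One way to make the four categories concrete is to treat them as the fields of a record describing any given assistance facility, so that systems can be compared dimension by dimension. The example values below are invented, not drawn from the paper.

    # Describing an assistance facility along the paper's four dimensions;
    # the example values are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class AssistanceFacility:
        access_method: str         # how users request help
        data_structure: str        # how help texts relate to each other
        architecture: str          # where help sits relative to OS/applications
        contextual_knowledge: str  # what the system retains about the session

    help_key = AssistanceFacility(
        access_method="dedicated HELP key",
        data_structure="flat list of topic screens",
        architecture="built into each application",
        contextual_knowledge="none; every request starts fresh",
    )
    print(help_key)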
Automatic construction of explanation networks for a cooperative user interface BIBAFull-Text 6-14
  Philip J. Hayes; Ingrid D. Glasner
This paper is concerned with providing automatically generated on-line explanations to the user of a functional computer subsystem or tool: what the tool can and cannot do, what parameters and options are available or required with a given command, etc. The explanations are given through the COUSIN interface system, which provides a cooperative tool-independent user interface for tools whose objects, operations, input syntax, display formats, etc. are declaratively represented in a tool description data base. The explanations are produced automatically from this data base, with no incremental effort on the part of the tool designer, and in a single uniform style for any tool that uses COUSIN as its interface. The explanation facility takes the form of a fine-grained, tightly linked network of text frames supported by the ZOG menu-selection system. Exactly what information the net building program, NB, extracts from a tool description, and the way in which this information is formatted in the text frames, is controlled by a second declarative data base called the aspect description. The declarative nature of the aspect description makes it easy to adapt NB to changes in and extensions to the tool description formalism, and to experiment with the structure of the explanation network. We also describe how the appropriate network frame can be found and displayed in response to specific explanation requests from the user.
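The net-building step can be suggested with a small sketch: walk a declarative tool description and emit one cross-linked text frame per command and per parameter. The description format below is invented, and neither COUSIN's actual formalism nor ZOG's frame layout is reproduced.

    # Hedged sketch of the NB idea: a declarative tool description is
    # turned into a network of linked explanation frames. All formats and
    # names here are illustrative only.

    tool = {
        "name": "copy",
        "commands": {
            "copy": {
                "summary": "copy a file",
                "parameters": {
                    "source": "file to copy from (required)",
                    "dest": "file to copy to (required)",
                },
            }
        },
    }

    def build_network(tool):
        frames = {}
        for cmd, desc in tool["commands"].items():
            # one frame per command, with menu links to its parameter frames
            frames[cmd] = {
                "text": f"{cmd}: {desc['summary']}",
                "links": [f"{cmd}.{p}" for p in desc["parameters"]],
            }
            for p, ptext in desc["parameters"].items():
                frames[f"{cmd}.{p}"] = {"text": f"{p}: {ptext}", "links": [cmd]}
        return frames

    net = build_network(tool)
    print(net["copy"]["links"])      # ['copy.source', 'copy.dest']
    print(net["copy.dest"]["text"])  # 'dest: file to copy to (required)'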
Using offline documentation online BIBAFull-Text 15-20
  Lynne A. Price
Current interactive programs usually provide some form of online documentation in addition to the traditional hard-copy user's manual. To save the expense of writing two documents covering the same material, it is not uncommon to find offline manuals that are available interactively, as well as printed versions of material originally organized for online use. Because of the difficulties inherent in using the same material in different ways, neither approach is totally satisfactory. The THUMB system minimizes these problems by structuring offline documentation for interactive use. An expert on a particular text (e.g., its author) prepares a detailed representation of the organization of material within the document. Once this data structure (which resembles a thorough table of contents and a heavily cross-referenced index) is available, users access information free from the strictures of linear text, simple indices, and page numbers. The expert's task is nontrivial, but it requires less effort than writing a new document. Creation and revision of text are made easy by supportive utilities. THUMB monitors readers' requests in order to provide experts with feedback about a document's use. Readers need not be aware of THUMB's underlying data structure or the tools available for experts.
How shall we evaluate prototype natural language processors? BIBAFull-Text 21-26
  Bruce W. Ballard
Recent years have seen important advances in computational linguistics and artificial intelligence. Although many problems remain, the goal of providing limited English-processing facilities for non-technical computer users is within sight. By the end of the decade, numerous systems providing limited coverage of "natural language" will be available for business and home use. Several systems (e.g. TQA [16]) have already become operational. One system (ROBOT [7]) has been supporting natural language inputs in a dozen or so different commercial database applications for at least three years. Many other systems have been developed to the prototype stage and will soon be able to be transferred, with varying degrees of effort, from a research to a production environment. Each system tends to provide special features of its own, and the future prospects for database, office, instructional, and other environments are quite exciting.
Redesign of the user interface involving users of a large operational real-time system BIBAFull-Text 27-30
  Thomas H. Martin
Today many large systems exist which have had many designers, have been patched up over the years, were designed for a different type of user than current users, and were once (but are no longer) state of the art. The Deep Space Network at the Jet Propulsion Laboratory is such a system. In Australia, Spain, and California, operators of the system use inflexible, incompatible routines to route data to Pasadena. Worker motivation and accuracy have to remain high for the system to work. In an attempt to develop redesign guidelines, users were queried regarding their attitudes toward and difficulties with the system. Interface alternatives were isolated and incorporated into a prototype for assessing the impact of the alternatives on user behavior. The resulting guidelines form a user-oriented, experience-based basis for continuing system evolution.
    Evaluating the "friendliness" of a timesharing system BIBAFull-Text 31-34
      Lorraine Borman; Rosemary Karr
    The decade of the Sixties served to introduce most university campuses to the computer; the Seventies brought the computer, via a terminal, into every facet of university life. Computing in the Eighties will cause every university and college to evaluate and reconsider its exploitation of modern computing equipment for education and research.
       For example, at Northwestern University, it was recognized that continued growth in timesharing would be a major factor in computing at NU in the 1980s and that this growth would come from a large community of new users and of casual users. In January 1980, the Computing Center began a long-range planning study. A five-year equipment enhancement and replacement plan was to be developed which was intended to reverse an unsatisfactory trend toward computer saturation, to further improve and modernize our computer offerings, and to ensure that NU remained on a path of excellence in computing. Since time-sharing had already increased to over 50% of the total usage of the computer, a decision was made to begin the evaluation of modern timesharing systems, with special emphasis in two areas: 1) efficiency and reliability, and 2) the user interface.
       This paper describes the processes which were developed and used for the evaluation of the user interface, or as it came to be known, the "friendliness" study [1].
    Evolution of a query translation system BIBAFull-Text 35-41
      Jyh-Sheng Ke; Shi-kuo Chang
    This paper presents the motivation, history, and idiosyncrasies of a query translation system. Details of the translation process are also described.
    The need for quantitative measurement of on-line user behavior BIBAFull-Text 42-45
      W. David Penniman
    An argument is made for the systematic collection and analysis of data regarding user-computer interaction in an on-line setting. A suggested approach involving preliminary data collection/analysis, development of a conceptual framework or model, and validation of the model is described. The case for this approach is supported by presentation of some preliminary results from a study of monitor data collected from the National Library of Medicine's ELHILL transaction file. Follow-on steps are proposed including comparison of research results to other studies of the same system or studies using similar techniques.
    A statistical user interface for the Relational Model of data BIBAFull-Text 46-52
      Robert F. Teitel
    In the decade since the introduction of the Relational Model as a user view of large stored data bases, a variety of user languages have been proposed and a number of experimental systems have been implemented. The current computer science literature is replete with papers on the theoretical and practical aspects of the Relational Model and its implementation, as are most recent texts on data management systems.
       Implicit in the design of the user languages of most database systems, including those based on the Relational Model, are assumptions regarding the patterns of access to and the usage of the content of the database. Somewhat oversimplified, the assumed pattern of access is to search for a particular occurrence (case, observation) in the database which satisfies a given condition, and then to display the values of all attributes (fields, variables) of that one occurrence. The languages are designed to permit users to pose queries such as, for example, "what widgets do we buy from ABC industries?" or "display Jones' employment history". Queries of this type are termed informational queries; and systems supporting such queries with appropriate user languages and internal data storage techniques and access methods are information systems.
       A statistical query, similarly oversimplified, specifies a pattern of access to most, if not all, of the occurrences in a database, and a usage pattern of at most a few of the attributes. Examples of statistical queries are "what is the average size of our purchase orders?" and "display the number of employees by race, sex, and job category". Current statistical systems have limited capability for performing analysis over large and complex data collections, and their user languages reflect this limitation. A statistical query, as defined here, need not involve sophisticated mathematical analysis; the distinction between informational and statistical is derived from the antithetical patterns of access to and usage of the data content of a database. Most work on access languages for relationally based data systems has been on information query languages; very little work has been done on statistical query languages.
       This paper, then, discusses some elements of a language for statistical queries for a data system employing the Relational Model as the user view of large stored data bases.
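       To make the access-pattern distinction concrete, a minimal Python sketch over a toy in-memory relation follows; the field names and data are invented for illustration and are not drawn from the paper.

          # Informational vs. statistical access patterns over a toy relation.
          purchases = [
              {"item": "widget", "supplier": "ABC", "amount": 120.0},
              {"item": "gadget", "supplier": "XYZ", "amount": 75.5},
              {"item": "widget", "supplier": "ABC", "amount": 310.0},
          ]

          # Informational query: locate particular occurrences, then display
          # all attributes of those occurrences.
          abc_widgets = [row for row in purchases
                         if row["supplier"] == "ABC" and row["item"] == "widget"]

          # Statistical query: touch every occurrence, use only one attribute.
          average_order = sum(row["amount"] for row in purchases) / len(purchases)
          print(abc_widgets, average_order)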
    Keyboard entry - can it be simplified? BIBAFull-Text 53-58
      Richard I. Land
    The present keyboard arrangement cannot be defended as comfortable, logically arranged, or optimized for human efficiency. Information theory based experiments suggest measures for alternative arrangements. Character sets used in different tasks can be expected to yield different optimal key locations. New tasks are introducing new characters and changing the frequency of selected old ones. Numerous alternative arrangements for alphanumeric fingered entry have been designed, but none are supported by conclusive testing. The amateur keyboard user far outnumbers the professional. Computer entry and word-processors are overtaking the simple typewriter as common alphanumeric stroke entry devices. Compromise and selection of a simplified keyboard that is compatible with present mechanical and electronic designs is advocated.
    Adaptable user interfaces for portable, interactive computing software systems BIBAFull-Text 59-64
      R. Evans; N. J. Fiddian; W. A. Gray
    In the context of this paper a computing software system consists of a database, an associated user interface which allows users to analyse the data and the routines or programs which implement the analytic functions available through the user interface. It is assumed that the complete system - source code and data - already exists in a form which is as easily portable as possible between different computer environments. For such systems adaptability is the problem of adjusting the user interface and analytic capabilities to suit different user communities when such a system is transferred from one environment to another. This may include adaptation to specific hardware facilities as well as user requirements.
       In 1977/8 the International Planned Parenthood Federation (IPPF) funded a project at University College Cardiff to implement a portable computing software system originating from the Population Dynamics Group (PDG) at the University of Illinois. This system allowed users to perform population projections under different demographic conditions showing in a graphical presentation how the population of a country varies over selected time spans. The database consisted of population statistics for a number of countries. When implemented at Cardiff it was intended that this system should be used as a demographic training aid by the post graduate diploma students in the David Owen Centre for Population Growth Studies. These students are an international group who are specialists in the field of demography but have little or no computing background.
       This paper will discuss briefly how this portable system was implemented on a PDP 11 minicomputer at Cardiff and then give a fuller description of the adaptation of the user interface and analytic capabilities to the local community and its computer facilities. General conclusions will be drawn as to how such systems should be written so as to ease the problems of adaptability.
    User consulting in three forms of network-based organization BIBAFull-Text 65-68
      Richard C. Roistacher
    The utility of computer networking to organizational tasks is discussed. Three forms of network organization are described, and some examples given. Problems of user consulting in each form of organization are discussed.
    Lexicon design using perfect hash functions BIBAFull-Text 69-78
      Nick Cercone; Max Krause; John Boates
    The research reported in this paper derives from the recent algorithm of Cichelli (1980) for computing machine-independent, minimal perfect hash functions of the form:
       hash value = hash key length + associated value of the key's first letter + associated value of the key's last letter
       A minimal perfect hash function is one which provides single probe retrieval from a minimally-sized table of hash identifiers [keys]. Cichelli's hash function is machine-independent because the character code used by a particular machine never enters into the hash calculation.
       Cichelli's algorithm uses a simple backtracking process to find an assignment of non-negative integers to letters which results in a perfect minimal hash function. Cichelli employs a twofold ordering strategy which rearranges the static set of keys in such a way that hash value collisions will occur and be resolved as early as possible during the backtracking process. This double ordering provides a necessary reduction in the size of the potentially large search space, thus considerably speeding the computation of associated values.
       In spite of Cichelli's ordering strategies, his method is found to require excessive computation to find hash functions for sets of keys with more than about 40 members. Cichelli's method is also limited since two keys with the same first and last letters and the same length are not permitted.
       Alternative algorithms and their implementations will be discussed in the next section; these algorithms overcome some of the difficulties encountered when using Cichelli's original algorithm. Some experimental results are presented, followed by a discussion of the application of perfect hash functions to the problem of natural language lexicon design.
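       As an illustration of the basic technique (not the authors' improved algorithms), a minimal Python sketch of a Cichelli-style backtracking search follows; it checks only that the resulting function is collision-free, omitting Cichelli's ordering strategies and the minimal-range constraint.

          # Find non-negative letter values g so that
          #     h(key) = len(key) + g[key[0]] + g[key[-1]]
          # is collision-free over a fixed key set (a sketch; Cichelli's
          # ordering heuristics and minimality constraint are omitted).
          def find_letter_values(keys, max_value=10):
              letters = sorted({k[0] for k in keys} | {k[-1] for k in keys})
              g = {}

              def no_collision():
                  seen = set()
                  for k in keys:
                      if k[0] in g and k[-1] in g:   # both letters assigned
                          h = len(k) + g[k[0]] + g[k[-1]]
                          if h in seen:
                              return False
                          seen.add(h)
                  return True

              def backtrack(i):
                  if i == len(letters):
                      return True
                  for v in range(max_value + 1):     # try each associated value
                      g[letters[i]] = v
                      if no_collision() and backtrack(i + 1):
                          return True
                  del g[letters[i]]
                  return False

              return dict(g) if backtrack(0) else None

          print(find_letter_values(["if", "else", "while", "for", "return"]))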
    Designing SENSE (a software environment for social science rEsearch): The role of software tools BIBAFull-Text 79-85
      N. J. Fiddian; W. A. Gray; M. W. Read
    In most general purpose computer systems there is a wide variety of software available to users. Such software is usually provided in one of three organisational forms - routines in a library; collections of related functions grouped in a package with a common interface; independent programs called through operating system commands. This interdependent tripartite structure creates problems for non-sophisticated users as it involves different levels of user interface complexity.
       At the routine level a user must write programs in an appropriate host programming language to use the software. If he wishes to use a selection of routines written in incompatible languages then he may have to familiarise himself with more than one host language. In each language he must be aware of the calling conventions for routines, the possible representations of various types of data, the methods of passing parameters and the ways of inputting and outputting data to and from the external environment. This type of interface occurs with libraries like NAG and IMSL.
       In the case of packages the imperative user interface is usually somewhat simpler, consisting essentially of a name identifying the function required and some associated parameters which identify variables, labels, files, options, control and code values, etc as appropriate. However, function calls of this form must normally be preceded by a non-trivial amount of declarative and other "red tape" information expressed in the package interface language. Also, package environments can be restrictive in that the user is constrained to the types of data structure and analysis supported by the chosen package unless he is prepared to write programs to transform his data for other packages or to analyse it independently. SPSS is typical of this kind of package.
       When software facilities are provided at the program level, the user interface often consists simply of one-line program invocation commands written in the local operating system's command language, with program options and data files identified by command parameters. Common examples of such facilities are sort and archiving programs. A program level interface becomes even simpler, and at the same time more powerful, if command sequences can be formed into parameterised command procedures and if programs are enabled to communicate directly with one another without the need for explicit intermediate files.
       In the latter type of environment the application software user generally finds that there are analytic program tools available to meet only some of his requirements. Consequently he has to embrace either or both of the other levels in addition in order to increase the analytic power available to him. Transfer between levels is not easily accomplished in most systems as facilities do not normally exist to help the user move data between levels. This difficulty comes on top of the obvious problem of having to master more than one interface and more than one level of complexity.
       In the SENSE project (11), which is funded by the U.K. Social Science Research Council, we are creating a prototype computing environment for social science researchers which can accommodate non-sophisticated users. The aim is to provide an integrated environment where such users will have a complete range of application software available (packages, routines and programs) through a single, simple user interface. We believe that this can be achieved by exploiting and extending the concept of software tools propounded by Kernighan and Plauger (19), so that as far as possible all software can be used through a program level interface, with its attendant advantages. Following Kernighan and Plauger we believe that software tools "can be used to create a comfortable and effective interface to existing programs", as well as providing an ideal model for the structuring of brand new application software. This paper will consider various aspects of the initial design of the SENSE software environment with particular reference to the importance of software tools in that design.
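       The flavor of the program-level interface advocated here -- small tools composed directly, with no explicit intermediate file -- can be sketched as follows (a Unix-style illustration assuming a sort program on the path; this is not SENSE itself):

          # Compose two steps through a pipe rather than an intermediate file.
          import subprocess

          words = "banana\napple\ncherry\n"
          result = subprocess.run(
              ["sort"],                  # any filter-style tool could stand here
              input=words, capture_output=True, text=True, check=True,
          )
          print(result.stdout, end="")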
    Building and accessing an REL database BIBAFull-Text 86-90
      Steve D. Gadol; Egon E. Loebner
    This paper discusses the construction of an experimental database at Hewlett-Packard Laboratories using the REL ENGLISH software provided by Frederick and Bozena Thompson of the California Institute of Technology. Of special interest is the quasi-natural interface and its ability to tolerate ambiguities. This provides a support mechanism for multiple user views of the same data in which disambiguation is accomplished during semantic processing.
    Cognitive style, categorization, and vocational effects on performance of REL database users BIBAFull-Text 91-97
      Diana Gail Egly; Keith T. Wescourt
    Twelve subjects from two job categories, sales engineers and programmer analysts, used an REL ENGLISH database to answer a set of questions. These questions were designed to require successively more complex interactions. The database contained Hewlett-Packard's Condensed Order Records, which were pertinent to the jobs of the sales engineers.
       All of the subjects were given a battery of cognitive tests measuring cognitive style and pattern extrapolation skills prior to using the database. They also received a brief training session on the structure of the database.
       Analysis of the subjects' interactions with the REL ENGLISH database, particularly analysis of the errors made, showed: first, that cognitive style is significantly correlated with the number of questions successfully completed; second, that while sales engineers were able to access all levels of the hierarchy in the database, programmer analysts had significantly more difficulty accessing data from higher levels than they did with data from the same or lower levels than the standard, entry level; and third, that programmer analysts had less difficulty with the fixed-format, programming-language-like features of REL ENGLISH, while sales engineers had less difficulty with the free-format, English-like features of REL ENGLISH.
       These findings suggest that quasi-natural language database interfaces are appropriate for nonprogrammers who have a field-independent cognitive style and who already are domain experts in the area covered by the database.
    An integral approach to user assistance BIBAFull-Text 98-104
      Robert S. Fenchel
    User assistance is incorporated into some of today's interactive computing systems. The assistance is rarely consistent in its accuracy, availability, accessibility or style. In this paper we discuss general requirements for assistance systems and a characterization of different types of assistance which may be provided users. A technique for integrating the design of an assistance system with the design of an interactive computing system is described. The technique satisfies the expressed requirements and greatly facilitates the development of assistance systems. Finally, a brief discussion of techniques for evaluating the quality and effectiveness of an interactive assistance system is presented.
    Short-term friendly and long-term hostile? BIBAFull-Text 105-110
      John C. Klensin
    Several authors have suggested, and we are hearing some additional papers on the subject at this conference, that our computer systems should be "friendly" -- that the new user, or the infrequent user, should be able to use them quickly, without any special learning, and without any resort to written materials. My colleagues and I are responsible for a large analysis system [1, 3] that has been in active use outside its development group for about five years and which has several philosophically similar predecessor systems that go back another three or four years [4, 5, 6]. It is interactive in the sense that one of its reasons for existence is to permit the user to interact with data and tease results out of them in a variety of ways -- it has never been, nor is it derived from, a front-end to a batch system or batch thinking. Its users have ranged in skill and background from the beginning student to the professional statistician developing new techniques; from the academic researcher to the clerk in commercial environments. We draw, from this experience, some differing views on what kinds of system designs are friendly and what sorts of assumptions lead to "friendly" systems.
    The mini-micro connection BIBAFull-Text 111-112
      G. R. Boynton
    The office of the future is defined. It is a work station dominated by a microcomputer which is in communication with more powerful computers, large disks, printers, and other equipment which can be shared. The large computers, disks, printers, and all the rest already exist. Microcomputers, or desk top computers, already exist. There are only two steps left in realizing the office of the future. One step involves electronics: establishing high speed communication between the desk top computer and all of the other equipment. The second involves programming: defining and developing coherent software systems. This paper is about the way in which these two problems were handled by the Department of Political Science at the University of Iowa.
    A program for social science computer literacy BIBAFull-Text 113-115
      Paul J. Strand
    A strategy for organizing the social science computer user community is presented. The strategy recognizes that social scientists have exceptional educational needs and unfavorable budgetary constraints. A series of workshops is proposed to reduce curriculum redundancy and avoid the costly "on demand" mode of consultation that has developed in most computer centers. An example of a workshop is provided.
    Interfacing to text using HELPME BIBAFull-Text 116-124
      Thomas P. Kehler; Mike Barnes
    HELPME is a Lisp based system designed to provide on-line help for novice and expert users of computer systems. HELPME permits the implementation of easy to use interfaces to existing documents by allowing a user familiar with a document (a 'document expert') to produce an index and incorporate information relating to the structure of the document into the interface. A typical user of HELPME can then interact with the document and index through a series of commands to quickly find the information desired.
       The primary advantage of a system like HELPME is that it permits construction of interfaces to existing on-line documents and provides three modes of interaction with the documents: simple display, index-based query and context overview. Simple display permits forward and reverse movement through a document while index-based query uses key-words to select relevant sections of the document hierarchy for display. Context overview permits a hierarchical view of the document. For example, the table of contents of a document can be used to construct this hierarchy. Each of these modes of interaction is independent and may be selected by the user at any point. The goal of HELPME is to allow a user to find any information in a document relating to the user's requests. Of course, many users do not have a good grasp on exactly what they are looking for but rely on inadvertent discovery. It is hoped that the flexibility of a HELPME-like system will satisfy the goals of an easy-to-use, extensible help system for computing environments. A long term goal for HELPME is to use domain knowledge and user models in user assistance and information management.
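       The three modes are easy to picture in a small sketch; the data structures below are invented for illustration and are not HELPME's own.

          # A document as (heading, body) sections plus a keyword index,
          # supporting the three modes named in the abstract.
          sections = [
              ("1 Introduction",    "How to log in and start the editor ..."),
              ("2 Editing",         "The editor commands are ..."),
              ("2.1 Deleting text", "To delete a line, type ..."),
          ]
          index = {"login": [0], "editor": [1, 2], "delete": [2]}

          def simple_display(pos, step=1):
              """Move forward or backward through the running text."""
              return sections[(pos + step) % len(sections)]

          def indexed_query(keyword):
              """Select the sections the index lists for a keyword."""
              return [sections[i] for i in index.get(keyword, [])]

          def context_overview():
              """A hierarchical view of the document: headings only."""
              return [heading for heading, _ in sections]

          print(context_overview())
          print(indexed_query("delete"))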
    Human diversity and the choice of interface: A design challenge BIBAFull-Text 125-130
      Starr Roxanne Hiltz; Murray Turoff
    As part of a field trial, the Electronic Information Exchange System (EIES) provided a variety of interfaces and user aids. Users were permitted to freely choose from the available variety at any time. They were then asked to report on their frequency of use of the various alternatives at two points in time. We found that there is no one style of interface or source of user support which will satisfy all users at any point in time, or even the same user as experience and familiarity with the system change. While their generalizability is unknown, our observations suggest that human helpers (user consultants on EIES) are the single most valued source of user support, and that system designers should consider incorporating an integrated and somewhat redundant system of both menus and commands into the interface.
    Living taxonomies in the corporate world: The need for multinested data models BIBAFull-Text 131-136
      Egon E. Loebner; Steven D. Gadol
    The complexity of information processing, disseminating and controlling is very high within a sizable corporation. Database design, targeted to carry out these functions, is constantly improving. Nevertheless, nonprogrammers have trouble accessing most databases. Layers of EDP personnel, as well as unresponsive and cumbersome systems, are an obstacle to effective database use.
       In this paper, we describe an exploratory investigation of databases based on REL ENGLISH [1]. In our system, data structures and access language were designed to map closely the corporate structure and its terminology. Our test vehicle was a portion of Hewlett-Packard's internal information network, the corporate Order Processing System. We have identified about a dozen job related perspectives belonging to geographically dispersed and functionally stratified end-users within various entities of the HP organization. A multinested data model is an intertwined hierarchy: the join of separate hierarchies with different lexicons but shared data events. For our test, we have selected two major intersecting multinested user views: sales and manufacturing. Our design accommodates users with differing perspectives of this model.
       The system data structures were constructed using the REL ENGLISH primitives. The Condensed Order Records (COR) base spanned a multinested hierarchy four levels deep for the Sales Organization taxonomy and five levels deep for the Manufactured Products taxonomy. The design permits the user to query the COR base about the data model itself. He can, of course, also obtain the standard statistical views of the data.
       Corporate taxonomies are dynamically changing structures. The database model needs to reflect this change. In most cases, the user wants to manipulate data at three levels in his own job taxonomy and at all levels in the other taxonomies. The multinested data model is needed in order to permit the user to view the data from the same perspective that he views his job in the corporate organization.
    A study of procedure descriptions by non-programmers BIBAFull-Text 137
      Lawrence Miller
    Providing mechanisms for inexperienced users of computer systems to program the computer to repetitively perform tasks that the user normally does in his or her daily job is one of the most challenging tasks for designers of highly interactive computer systems oriented to naive users. This report presents early results of a study conducted to ascertain the written analogues of the programming structures iteration, conditional and variables. The study required users already familiar with office procedures to practice a routine forms fill-in and data verification task over a period of one week. At the end of that time, they were required to write a set of procedures as if they were instructing a new person in the performance of the job. These written protocols (in conjunction with verbal protocols taken during the learning phase) were analyzed in terms of the above-mentioned structures.
       It was found that a variety of structures are used by naive users, but more importantly, all users made serious errors of both omission and commission. In particular, events of low probability were not described at all. In certain cases the written instructions did not correspond with the way in which users actually performed the tasks.
       The implications for office systems designers, amongst others, are explored.
    Note: (abstract only)
    The graphic design of friendly faces for information management BIBAFull-Text 137
      Aaron Marcus
    Principles of graphic design have been utilized in redesigning the interface for Seedis, a large information management system. The structure and processes of Seedis are briefly described. The graphic design approach is explained and graphic design principles are outlined. Examples of enhanced menus, prompts, help messages and data directories are shown to indicate the nature of improvements.
    Note: (abstract only)
    Human factors studies with system message styles BIBAFull-Text 138
      Ben Shneiderman
    Computer systems often contain messages which are imprecise ('SYNTAX ERROR'), hostile ('FATAL ERROR, RUN ABORTED'), cryptic ('IEH291H'), or obscure ('CTL DAMAGE, TRANS ERR'). Such messages may be acceptable to computer professionals who regularly use a specific system, but they lead to frustration for novices and for professionals who are using new features or facilities.
       We have conducted five studies using COBOL compiler syntax errors and text editor command errors to measure the impact of improving the wording of system messages. The results indicate that increased specificity, more positive tone, and greater clarity can improve correction rates and user satisfaction.
       An overview of the experimental results will be presented along with guidelines for writing system messages.
    Note: (abstract only)
    The coming world of "what you see is what you get" BIBAFull-Text 138
      Don Hatfield
    The term 'what you see is what you get' has been used to refer to the editing of fully formatted documents so that every edit change causes the text to be updated immediately to show the document as it would appear when printed, thus eliminating the intermediate step of (periodically) invoking a formatter explicitly. This mode of working is generally agreed to produce more and better results with less effort, both because the real-world simulation of a document is easier to use than a mixture of format command statements and unformatted text, and because many errors show up more immediately in a real-world situation than in a complicated abstraction.
       What happens if we extend this notion throughout the interface between the user and the computer? We enter a world of constrained objects and functional (applicative) actions. If the constraints are algebraic, the result is VISICALC-like. If the constraints are formats, the result is format programs which are also (unfilled) documents and can be created and edited as document images. If the constraints are actions themselves, the result is islands of action-programs in a sea of constraints.
       We propose, as the user interface, a general constraints language for documents. The documents are also "templates" or "forms", and have a robustness that makes them hard to injure. Anything may be represented as a document, from a memo to a database to a protein molecule. The commands for applying constraints all take no arguments other than the thing the user is pointing at when the command is given. The user's world is then like a large Tinkertoy environment, for constructing active and passive things.
       Examples of working in this world, in black and white and in color, will be given covering traditional text operations, the construction and use of document templates, the equivalent of programming as we know it, the equivalent of programming as we don't know it, and finally a John Milton template to test the relation between Paradise Lost and the fundamental theorem of the calculus.
    Note: (abstract only)
    Design issues for online documentation systems BIBAFull-Text 139
      Carolyn P. Steinhaus
    The design of an effective interactive documentation system is introduced by tracing a hypothetical development effort aimed at shifting information contained in printed volumes of documentation to a form suitable for interactive access. Taking this approach presents a view of online documentation systems as the result of the process of adapting information conveyed in printed volumes to the constraint of interactive software considered as an information medium. The resulting discussion necessarily involves consideration of the demands which interactive software makes on both the organization of information about interactive programs and on the cognitive capacities of people using it.
       Existing documentation is typically intended to serve all of the informational needs of any person who uses an interactive software system. The requirements of software systems for structure and precision demand a more detailed understanding than currently exists of exactly how to provide information to people of varying levels of experience with a particular program or with computing in general. The interaction between the purposes of existing documentation and the requirements of an online system provide an interesting context for discussion of the major issues facing the designer of an interactive documentation system.
    Note: (abstract only)
    Naive user behavior in a restricted interactive command environment BIBAFull-Text 139
      Allan G. Haggett; John R. McFadden; Peter R. Newsted
    Results are reported showing the changing pattern of command use by introductory business data processing students. Using the ability of the University of Calgary's Honeywell Multics Operating System to tailor a command and response environment, a subset of commands and responses (called GENIE) was set up in a user-friendly environment to facilitate novice students' programming at CRT terminals. Frequency and time of usage of all commands were metered and changing patterns of usage emerged as the semester progressed. For example, "help" usage -- which was originally quite extensive and broad -- limited itself over time to questions only about specific topics. Reluctance to use an "audit" facility to capture an interactive session disappeared as the commands for such usage were likened to a movie camera taking pictures over a student's shoulder. It is further noted that specific emphasis was placed on simplifying commands and reducing options.
       The whole idea of a restricted command environment is compared to the "abstract machine" concept of Hopper, Kugler, and Unger who are building a universal command and response language (NICOLA, a NIce Standard COmmand LAnguage). GENIE is seen as an example of what such an abstract machine could be if the Multics operating system were viewed as a basic or "parent" abstract machine. Interactive environments such as Multics provides are viewed as essential to providing a satisfactory timesharing system for the varied, but frequently intermittent, uses in the social sciences.
    Note: (abstract only)
    Design considerations for data base facilities on a desk top BIBAFull-Text 140
      Susanne S. Cochran
    The price of computing equipment is decreasing at a rate of about 30% per year and the cost of professional time is steadily increasing, driving industry to focus on improving professional productivity.
       Computer-aided engineering (design, analysis, research, testing, and planning) is a problem area where professional creativity and equipment flexibility are of paramount importance to success. Engineers and scientists are not typically computer professionals; they intimately understand the application at hand and do not want to be bogged down either with computerese and 25 manuals which might contain a desired answer, or with explaining enough of the problem to a computer professional to have the program written by someone else. Viewed from an efficiency perspective, the problem solution should ideally come from the engineer or scientist. In such application areas, price/performance is no longer the primary factor in selecting computing equipment; ease of adaptability and availability/accessibility are becoming more important criteria when identifying a computer which can provide effective man/machine synergy.
       The HP 9845 Computing System has the HP IMAGE DBM System capability available as a tool for its users. To help non-computer people to design data bases (the most difficult and frightening part of using a data base), we have created a data base design kit manual. This manual will guide a user, through either an intuitive or a rigorous design technique, from problem definition to a working data base diagram. From this diagram, the user is ready to define, create, and use the data base. For this, we have developed a general purpose data base management program, called QUERY/45. QUERY/45 can define and create data bases, and also provides updating facilities including adding, modifying, and deleting information with or without user-defined forms. All of the helps and teaching tools will enable engineers and scientists to use a data base without having to write any programs. After they become more experienced, the helps and menus can be bypassed in favor of formal command mode.
       The human factors engineering in the design of this program helps the computer system to become a partner in problem solving for the engineer or scientist.
    Note: (abstract only)
    Learning effectiveness: The impact of response time BIBAFull-Text 140
      Sherry Weinberg
    Response time is one of the key components of the human interface in an interactive computer system. This study evaluated two different response times and their impact on learning effectiveness. Using a counterbalanced experimental design (2**2 combinations of 2 response times), this study measured completion times, lesson mastery, error rates, and attitude. Data were obtained from student questionnaires.
       The Control Data PLATO Computer-based Education system provided the environment for the study. The system was connected to two networks with different response time characteristics. The means of the two response times tested were .25 seconds (response time A) and 1.3 seconds (response time B). Analysis of covariance and chi-square tests showed significant differences between the two response times (p < .05), with the following results:
  • 1. The subjects using the shorter response time finished the lessons significantly faster than the subjects using the longer response time.
  • 2. The number of subjects that mastered the lessons was significantly higher for the subjects using the shorter response time.
  • 3. The performance of subjects using the shorter response time for time dependent tasks was significantly better than the subjects using the longer response time. However, for time independent tasks, the subjects using the longer response time performed significantly better.
  • 4. The subjects using the faster response time showed significantly more favorable attitudes toward the response time experienced than the subjects using the slower response time.
       In conclusion, the shorter response time (A) was more efficient for learning and was more favored by students.
    Note: (abstract only)
    Concise natural language interaction BIBAFull-Text 141
      Paul Roller Michaelis; James A. Hendler
    It has been demonstrated that interactive natural language dialog is remarkably unruly, with many misspellings and grammatical errors. Although progress has been made in getting computers to process pristine English text, the day when computers will be able to process unlimited interactive natural language dialog is still very far off.
       The vast majority of the effort that has gone into designing interactive natural language systems has concentrated on the computer half of the human-computer dyad. Our approach concentrates on the human half. Specifically, the goal of our research is to define a human engineered subset of natural language that retains all of the user-oriented benefits of unrestricted natural language dialog, while greatly reducing the processing burden that true natural language interaction places on the computer. This paper is a preliminary examination of the possibility that these criteria may be satisfied by simply asking users to be concise.
    Note: (abstract only)
    Issues for Ease of Use in personal computing BIBAFull-Text 141
      Harry Tennant
    Ease of Use can be thought of as consisting of two components: Ease of Learning and Ease of Doing. In the past, most of the attention in discussions of Ease of Use has focused on Ease of Learning. This is the motivation behind consideration for the "naive" or "casual" user. The most common approach has been to allow trading computing functionality for Ease of Learning. This makes the most commonly performed tasks very simple to perform, but prevents a wide range of other tasks from being performed at all. This affects Ease of Doing.
       Ease of Doing is a concept that has been primarily associated with expert users of computing systems. A task is only Easy to Do on a computer if the proper tools have been provided for doing it. Since there is an enormous range of tasks to apply systems to, there must also be a large collection of tools. A great variety of software tools that are finely tuned to particular applications should be made available to users. In addition, the system should be extensible to allow for ready customization.
       We feel that a sophisticated personal computing environment must provide a quick path for casual users to be able to operate parts of the system, and yet allow more habitual users a path to gain mastery over the more esoteric components of the system with time.
    Note: (abstract only)
    Designing considerate systems BIBAFull-Text 141
      Ronald E. Anderson
    Advances in both hardware and software continue to make it possible to design user oriented systems more easily. Because we have not had a language for describing the user orientation of computer systems, a variety of interpersonal metaphors have been used to aid in the comparative evaluations of systems. Recent cultural history has shaped the semantics of computer systems. Out of the turbulent, liberal strains of the 1960s emerged the movement to humanize computer systems. During the self-centered backlash of the 1970s the term friendly became a computer household word. During the 1980s we need to grow beyond a concern for friendliness alone and build systems that are considerate.
       Consideration supersedes friendliness in at least three major ways. First, it goes beyond satisfaction by focusing upon attempts to help and assist others. Secondly, it requires that a person take the role of another and take the other's needs into account. Thirdly, to be considerate is to be courteous and, most importantly, respectful. In these respects, the metaphor of the considerate system points to the essence of user orientation without sacrificing other critical system features such as productivity. In fact, truly considerate systems will facilitate productivity because of improved communication clarity, greater tolerance for user errors and idiosyncrasies, and increased availability of options, i.e., user-directed socio-computer interaction.
       Designing and developing considerate systems is not easy and requires considerable time and effort. Representative users must be involved in the selection of system features and in the process (formative) evaluation as well as the outcome (summative) evaluation. Consequently, there is a very necessary and essential role for the social scientist in the development of present day socio-computer systems.
    Note: (abstract only)
    A study of entity-based database interfaces BIBAFull-Text 142
      M. M. Mantei; R. G. G. Cattell
    A study is presented of a database system interface in which an entity (a concept) and the relationships in which it is involved are displayed to the user: the user is permitted to move about in the database by selecting entities related to the current one displayed. The database system is intended as a personalized database (PDB) for a scientist, student, manager, or anyone who has a need for a fast mechanism for storing and organizing a wide variety of information. The study is exploratory, recording baseline times and types of behavior for a variety of personal information management tasks performed by one individual. Data entry, information retrieval, and browsing behavior are examined and contrasted to behavior with more conventional storage media.
    Note: (abstract only)
    An editor-based programming support environment BIBAFull-Text 142
      W. J. Hansen
    Users of interactive systems typically must deal with numerous interactive interfaces, including especially the text editor and the system command interpreter. Unfortunately, the various interfaces too often have differing and even conflicting conventions. This paper suggests that an enhanced text editor can serve as the interactive interface for most purposes. For example, consider the file directory: instead of choosing among half a dozen or more system commands to view and modify it, the user can edit an image that represents the directory. Deletion, renaming, and movement to another directory are easily accomplished with ordinary editor commands. Other system commands can be supplanted by a mechanism of "creation sequences" for files. Rather than execute the creation sequence, the user simply asks to view the file resulting from it.
       To facilitate this form of interaction, the text editor must include some novel features. It must permit structured files, where the structure can be a field structure within records or a hierarchical structure between records. A suitable editor is sketched.
    Note: (abstract only)
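       The "creation sequence" idea resembles demand-driven building: a file is named by the commands that produce it and is built only when viewed. A minimal sketch follows (all names invented; nothing below is from the paper).

          # Each file is described by the action that creates it and is
          # built lazily, the first time the user asks to view it.
          creation_sequences = {
              "report.txt": lambda: "total widgets: 42\n",  # stands in for a build step
          }
          store = {}

          def view(name):
              if name not in store and name in creation_sequences:
                  store[name] = creation_sequences[name]()  # run the creation sequence
              return store[name]

          print(view("report.txt"), end="")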
    A contribution towards the measurement of user behavior BIBAFull-Text 142
      Helmut Wilke
    A prerequisite for the design of better systems - in terms of the human interface - is knowledge of the users, their problems and behavior. Within the context of a larger project comparing several large statistical program packages, attempts have been made to attack the problem of "knowing the user". Among traditional methods like surveys, different ways of automatic data collection have been tried, and their strengths and weaknesses can be discussed. A particularly powerful tool proved to be a logfile which is automatically updated each time certain software is used. It contains individual-level data about size of job and data set, control cards and statistical procedures used, types of errors, and more. This gives valuable insights about the structure of the user community, styles of package use, and weak points of packages.
       In my paper I will discuss some general problems of recording and analyzing user information, and will present data from the logfile described. This should be considered as an example in the methodological discussion as well as a substantive contribution to the analysis of SPSS use and users.
    Note: (abstract only)
    What makes computer games fun? BIBAFull-Text 143
      Thomas Malone
    The presentation deals with two questions:
  • 1) What makes games so captivating?
  • 2) How can the same features (that make computer games captivating) be used to make other user interfaces more interesting and enjoyable to use?
       First, three empirical studies are described. These studies analyze which features of several computer games are most important in making the games enjoyable. Then a set of heuristics for incorporating these features in other user interfaces is outlined. The heuristics are organized in three categories: challenge, fantasy and curiosity.
    Note: (abstract only)
    What can be learned from arcade games and home computer applications? (A Panel Discussion): The case for considering games and home applications BIBAFull-Text 143
      Karl L. Zinn
    One can't deny the effectiveness of video arcade games in reaching users! Just look at the number of quarters pushed into the slots, the time spent by people of widely differing abilities, and the number of repeat encounters with the systems. At least part of the success is due to the ease of getting started (the first play of the game gets one comfortable with the procedures), the high degree of visualization of controls and results, and the overall responsiveness. Other factors will be taken up by the panelists.
       Review of the home computer market shows what can be accomplished by an easy-to-use accounting aid through advertising, store demonstrations, and word of mouth. Visicalc has sold over a million dollars! Attendees will have an opportunity to try some of these impressive applications before and after the session.
    Note: (abstract only)
    Direct manipulation: A step beyond programming languages BIBAFull-Text 143
      Ben Shneiderman
    Direct manipulation is a style of interaction which has been used by implementers of widely varying systems. Direct manipulation permits novice users access to powerful facilities without the burden of learning to use a complex syntax and lengthy list of commands. Display editors use direct manipulation more than line editors. Form-fill-in is more direct than tag fields and delimiters. Spatial data management is more direct than query-by-example, which is more direct than SEQUEL. Computer arcade games and Visicalc are further examples.
       Direct manipulation involves three interrelated techniques:
  • 1. Provide a physically direct way of moving a cursor or manipulating the
        objects of interest.
  • 2. Present a concrete visual representation of the objects of interest and
        immediately change the view to reflect operations.
  • 3. Avoid using a command language; depend instead on operations applied to
        the cognitive model which is shown on the display.
    Note: (abstract only)
    Learning how to confer: The interplay of theory and practice in computer conferencing BIBAFull-Text 144
      Robert Parnes
    CONFER emerged from a concern with small group governance in both its communications and decision making dimensions. This context will be described, as well as the principles operationalized in the CONFER system: individual equality, freedom, privacy and flexibility, and the facilitation of individual participation. CONFER is based on the proposition that effective communication is an active process for all concerned. This activity is strongly encouraged in CONFER through a number of mechanisms designed to facilitate interaction between the user and the CONFER system, as well as interaction among all the users of the system. As well, growth of the system over time is promoted by interaction of the system designer with the user community.
    Note: (abstract only)
    Case study of a user-oriented conferencing system BIBAFull-Text 144
      Karl L. Zinn
    This session described how CONFER, a computer-based conferencing system at the University of Michigan, was developed with participation of users, and what impact the system has on communities of users. At this conference it may be especially interesting to discuss various ways in which help provided to new users has evolved. Some extrapolations may be made beyond electronic communication aids: orientation and training, on-line reference information, on-line consultation, etc.
    Note: (abstract only)
    The CONFER experience of the Merit Computer Network BIBAFull-Text 144-145
      Christine Wendt
    Members of the Merit staff first met Robert Parnes in the fall of 1975 and began participating in his experimental CONFERence shortly thereafter. It soon became evident that CONFER could help us provide consultation to our users, who were distributed over a large part of southeastern Michigan, and in January of 1976 Merit started what we believe to be the first CONFERence open to the general public, MNET:CAUCUS.
       Five years later CAUCUS is still alive and well, and we still use it to provide help to a widely-dispersed user community -- in fact, with the advent of Telenet service later that same year, and with Telenet's subsequent expansion of service to Canadian and overseas networks, our users are spread all over the world. But we have learned over the years that computer conferencing is good for much more than simply facilitating the user-consultant relationship. As we gained experience with CONFER we found that it gave us a solution to problems that were so basic we had simply taken them as part of the environment. CONFER also provided a medium for communication among consultants -- the Merit staff -- and among users.
    Note: (abstract only)
    Uses of CONFER at Wayne State University BIBAFull-Text 145
      Alan McCord
    CONFER, first used at Wayne State University in 1979, has proven itself to be an extremely useful and adaptable communications medium. Some examples of present CONFER applications at WSU are:
       Computing Center Staff CONFERence: Presently used to coordinate communication between the 180 staff members of the CSC. Communications between various departments isolated by distance and responsibility have been improved.
       Specialty CONFERences: CONFERences exist on such varied topics as school transportation, text processing, nursing education, microcomputers and instructional technology.
       Project Management: Used by the University's PLATO development staff for cross-campus management decision-making and communication. CONFER has reduced the need for staff meetings, has served as a "tracking device" for personnel appointments, and has kept a detailed log of project decisions. CONFER has also been used for CSC project development, notably for the design of an MTS Help Facility.
       Academic Communications: CONFER has been used to facilitate communication between students in an undergraduate Computer Science course, for graduate students in Instructional Technology, and for individual student projects. This summer, a CONFERence will be implemented which will manage course communications for 300 students in a freshman Computer Science course.
    Note: (abstract only)
    Third party consulting in the network environment BIBAFull-Text 145
      Richard C. Roistacher
    CONFER is used by a variety of organizations which have no direct contact with any MTS site (the CONFER host computer system). In many cases, the main contact of the CONFER user is with the third party vendor or consultant who is supporting the networking effort. Several problems arise when the third party relationship is not broadened to include the institutional consulting and documentation efforts. Such problems include:
  • 1. Overload on the time of the third-party consultant.
  • 2. Production of custom documentation which parallels the institutional documentation.
  • 3. User perceptions that they are not "ready" for institutional documentation.
  • 4. Development of abbreviated cognitive maps of the system and its user network.
  • 5. Users exchange superstitious views of how the system works.
       While it is not yet clear that the absorption of third-party, mediated networks will solve such problems, it is an obvious first step to a solution.
    Note: (abstract only)
    Conference on easier and more productive use of computing systems BIBFull-Text 146-149