
Proceedings of the 2014 ACM Symposium on Document Engineering

Fullname: Proceedings of the 2014 ACM Symposium on Document Engineering
Editors: Steven Simske; Sebastian Rönnau
Location: Fort Collins, Colorado
Dates: 2014-Sep-16 to 2014-Sep-19
Publisher: ACM
Standard No: ISBN 978-1-4503-2949-1; hcibib: DocEng14
Papers: 35
Pages: 214
Links: Conference Website
  1. Keynote address
  2. Modeling and representation
  3. Document analysis I
  4. Document analysis II
  5. Keynote address
  6. Collections, systems and management
  7. Applications I
  8. Generation, manipulation and presentation
  9. Applications II
  10. Workshops & tutorial

Keynote address

The evolving scholarly record: new uses and new forms, pp. 1-2
  Clifford A. Lynch
This presentation will take a very broad view of the emergence of literary corpora as objects of computation, with a particular focus on the various literatures and genres that form the scholarly record. The developments and implications here that I will explore include: the evolution of the scholarly literature into a semi-structured network of information used by both human readers and computational agents through the introduction of markup technologies; the interpenetration and interweaving of data and evidence with the literature; and the creation of an invisible infrastructure of names, taxonomies and ontologies, and the challenges this presents.
   Primary forms of computation on this corpus include both comprehensive text mining and stream analysis (focused on what's new and what's changing as the base of literature and related factual databases expand with reports of new discoveries). I'll explore some of the developments in this area, including some practical considerations about platforms, licensing, and access.
   As the use of the literature evolves, so do the individual genres that comprise it. Today's typical digital journal article looks almost identical to one half a century old, except that it is viewed on screen and printed on demand. Yet there is a great deal of activity driven by the move to data and computationally intensive scholarship, demands for greater precision and replicability in scientific communication, and related sources to move journal articles "beyond the PDF," reconsidering relationships among traditional texts, software, workflows, data and the broad cultural record in its role as evidence. I'll look briefly at some of these developments, with particular focus on what this may mean for the management of the scholarly record as a whole, and also briefly discuss some parallel challenges emerging in scholarly monographs.
   Finally, I will close with a very brief discussion of what might be called corpus-scale thinking with regard to the scholarly record at the disciplinary level. I'll briefly discuss the findings of a 2014 National Research Council study that I co-chaired dealing with the future of the mathematics literature and the possibility of creating a global digital mathematics library, as well as offering some comments on developments in the life sciences. I will also consider the emergence of new corpus-wide tools and standards, such as Web-scale annotation, and some of their implications.

Modeling and representation

ActiveTimesheets: extending web-based multimedia documents with dynamic modification and reuse features, pp. 3-12
  Diogo S. Martins; Maria da Graça C. Pimentel
Methods for authoring Web-based multimedia presentations have advanced considerably with the improvements provided by HTML5. However, authors of these multimedia presentations still lack expressive, declarative language constructs to encode synchronized multimedia scenarios. The SMIL Timesheets language is a serious contender to tackle this problem, as it provides alternatives for associating a declarative timing specification with an HTML document. However, in its current form, the SMIL Timesheets language does not meet important requirements observed in Web-based multimedia applications. To tackle this problem, this paper presents the ActiveTimesheets engine, which extends the SMIL Timesheets language by providing dynamic client-side modifications, temporal linking and reuse of temporal constructs at a fine granularity. All these contributions are demonstrated in the context of a Web-based annotation and extension tool for multimedia documents.
Automated refactoring for size reduction of CSS style sheets, pp. 13-16
  Martí Bosch; Pierre Genevès; Nabil Layaïda
Cascading Style Sheets (CSS) is a standard language for stylizing and formatting web documents. Its role in the web user experience is becoming increasingly important. However, CSS files tend to be designed from a result-driven point of view, without much attention devoted to the structure of the CSS file as long as it produces the desired results. Furthermore, the rendering intended in the browser is often checked and debugged with a single document instance. Style sheets normally apply to a set of documents, so modifications made while focusing on a particular instance might affect other documents of the set.
   We present a first prototype of a static CSS semantic analyzer and optimizer that is capable of automatically detecting and removing redundant property declarations and rules. We build on earlier work on tree logics to locate redundancies due to the semantics of selectors and properties. Existing purely syntactic CSS optimizers can be used in conjunction with our tool to perform complementary (and orthogonal) size reduction, toward the common goal of providing smaller and cleaner CSS files.
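To make the kind of redundancy concrete: a purely syntactic pass can already drop declarations that are overridden later in the same rule, which is the sort of complementary reduction mentioned above (the paper's own analysis goes further, using tree logics over selector and property semantics). The toy parser and rule model below are assumptions for illustration only, not the authors' tool.

    import re

    def parse_rules(css):
        """Very naive CSS parser: returns (selector, [(property, value), ...]) pairs.
        Ignores at-rules, comments and !important; for illustration only."""
        rules = []
        for selector, body in re.findall(r"([^{}]+)\{([^}]*)\}", css):
            decls = []
            for decl in body.split(";"):
                if ":" in decl:
                    prop, value = decl.split(":", 1)
                    decls.append((prop.strip().lower(), value.strip()))
            rules.append((selector.strip(), decls))
        return rules

    def drop_overridden(rules):
        """Within each rule, keep only the last declaration of each property,
        since the earlier ones can never take effect."""
        cleaned = []
        for selector, decls in rules:
            last = {}
            for prop, value in decls:
                last[prop] = value          # a later declaration always wins here
            cleaned.append((selector, list(last.items())))
        return cleaned

    def serialize(rules):
        return "\n".join(
            sel + " { " + "; ".join(p + ": " + v for p, v in decls) + " }"
            for sel, decls in rules)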
FlexiFont: a flexible system to generate personal font libraries, pp. 17-20
  Wanqiong Pan; Zhouhui Lian; Rongju Sun; Yingmin Tang; Jianguo Xiao
This paper proposes FlexiFont, a system designed to generate personal font libraries from camera-captured character images. Compared with existing methods, our system is able to process most kinds of languages, and the generated font libraries can be extended by adding new characters according to the user's requirements. Moreover, digital cameras rather than scanners are chosen as the input devices, making the system more convenient for ordinary users. First, users choose a default template or define their own templates, and then write the characters on the printed templates according to the instructions. After the users upload the photos of the templates with the written characters, the system automatically corrects the perspective and splits the whole photo into a set of individual character images. As the final step, FlexiFont denoises, vectorizes, and normalizes each character image before storing it in a TrueType file. Experimental results demonstrate the robustness and efficiency of our system.
Circular coding with interleaving phase, pp. 21-24
  Robert Ulichney; Matthew Gaubatz; Steven Simske
A general two-dimensional coding method is presented that allows recovery of data based on only a cropped portion of the code, and without knowledge of the carrier image. A description of both an encoding and recovery system is provided. Our solution involves repeating a payload with a fixed number of bits, assigning one bit to every symbol in the image -- whether that symbol is data carrying or non-data carrying -- with the goal of guaranteeing recovery of all the bits in the payload. Because the technique is applied to images, for aesthetic reasons we do not use fiducials, and do not employ any end-of-payload symbols. The beginning of the payload is determined by a phase code that is interleaved between groups of payload rows. The recovery system finds the phase row by evaluating candidate rows, and ranks confidence based on the sample variance. The target application is data-bearing clustered-dot halftones, so special consideration is given to the resulting checkerboard subsampling. This particular application is examined via exhaustive simulations to quantify the likelihood of unrecoverable bits and bit redundancy as a function of offset, crop window size, and phase code spacing.
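The payload-tiling idea can be pictured with the toy encoder below: a fixed-length payload is repeated bit by bit across a symbol grid, and a phase row is interleaved after every group of payload rows so that a decoder seeing only a crop can realign the bit stream. The binary offset pattern used for the phase row is an assumption; the paper's halftone embedding and variance-based confidence ranking are not reproduced.

    import numpy as np

    def encode(payload, rows, cols, phase_spacing=4):
        """Tile `payload` (a list of 0/1 bits) cyclically over a rows x cols grid,
        inserting a phase row after every `phase_spacing` payload rows."""
        n = len(payload)
        grid = np.zeros((rows, cols), dtype=np.uint8)
        idx = 0                          # running index into the repeated payload
        for r in range(rows):
            if (r + 1) % (phase_spacing + 1) == 0:
                # phase row: repeat the current payload offset in binary so a
                # cropped window containing one phase row can find the payload start
                width = max(1, (n - 1).bit_length())
                offset_bits = [((idx % n) >> b) & 1 for b in range(width)]
                grid[r] = [offset_bits[c % width] for c in range(cols)]
            else:
                grid[r] = [payload[(idx + c) % n] for c in range(cols)]
                idx += cols
        return grid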

Document analysis I

A new sentence similarity assessment measure based on a three-layer sentence representation, pp. 25-34
  Rafael Ferreira; Rafael Dueire Lins; Fred Freitas; Steven J. Simske; Marcelo Riss
Sentence similarity measures the degree of likeness between sentences. It is used in many natural language applications, such as text summarization, information retrieval, text categorization, and machine translation. Current methods for assessing sentence similarity represent sentences as bag-of-words vectors or as the syntactic information of the words in the sentence, and the degree of similarity between sentences is calculated by composing the similarity between their words. However, two important concerns in the area, word meaning and word order, are not handled. This paper proposes a new sentence similarity assessment measure that largely improves and refines a recently published method that takes into account the lexical, syntactic and semantic components of sentences. The new method proposed here was benchmarked on a publicly available standard dataset. The results obtained show that the new similarity assessment measure outperforms state-of-the-art systems and achieves results comparable to human evaluation.
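For intuition only, the sketch below scores two of the three layers: lexical overlap and a crude word-order agreement. The semantic layer of the paper's representation (and its syntactic analysis) would additionally need a parser and a lexical resource such as WordNet, which are omitted; the 0.7/0.3 weighting is an arbitrary assumption.

    def lexical_similarity(s1, s2):
        # Jaccard overlap of the word sets -- a stand-in for the lexical layer
        w1, w2 = set(s1.lower().split()), set(s2.lower().split())
        return len(w1 & w2) / len(w1 | w2) if (w1 | w2) else 0.0

    def order_similarity(s1, s2):
        # fraction of shared-word pairs that appear in the same order in both sentences
        t1, t2 = s1.lower().split(), s2.lower().split()
        shared = [w for w in t1 if w in t2]
        if len(shared) < 2:
            return 1.0 if shared else 0.0
        pairs = [(a, b) for i, a in enumerate(shared) for b in shared[i + 1:]]
        same = sum(1 for a, b in pairs if t2.index(a) <= t2.index(b))
        return same / len(pairs)

    def sentence_similarity(s1, s2, alpha=0.7):
        return alpha * lexical_similarity(s1, s2) + (1 - alpha) * order_similarity(s1, s2)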
Paper stitching using maximum tolerant seam under local distortions, pp. 35-44
  Wei Fan; Jun Sun; Naoi Satoshi
Paper stitching technology reconstructs a whole paper page from two sub-images separately captured by a camera with a limited field of view.
   Traditional approaches usually choose a single globally optimal seam and stitch the two sub-images along it. These methods perform well on rigid objects, but when distortion is introduced by uneven placement of the paper, local content of the two sub-images may be upside-down and misaligned. Some methods instead choose two matching seams, one on each sub-image, using either local patch similarity or a global consistency constraint. However, relying only on local matching may cause stitching to fail when a local patch is matched incorrectly, while relying only on the global constraint usually yields inaccurate stitching results. After the two seams are obtained, traditional methods usually construct the whole image through a global transformation along the seams, and image deformation usually occurs at this stage.
   In this paper, we propose a robust estimation algorithm to obtain the matched seams in the sub-images, and stitch the sub-images with a maximum tolerance to overcome image deformation. Finally, a whole image with a smooth stitching seam and minimal deformation is generated. Experimental results show that this new paper stitching method produces better results than state-of-the-art methods even under challenging conditions such as large distortion and large contrast differences.
Abstract argumentation for reading order detection, pp. 45-48
  Stefano Ferilli; Domenico Grieco; Domenico Redavid; Floriana Esposito
Detecting the reading order among the layout components of a document's page is fundamental to ensure effectiveness or even applicability of subsequent content extraction steps. While in single-column documents the reading flow can be straightforwardly determined, in more complex documents the task may become very hard. This paper proposes an automatic strategy for identifying the correct reading order of a document page's components based on abstract argumentation. The technique is unsupervised, and works on any kind of document based only on general assumptions about how humans behave when reading documents. Experimental results show that it is effective in more complex cases, and requires less background knowledge, than previous solutions that have been proposed in the literature.
Generating summary documents for a variable-quality PDF document collection, pp. 49-52
  Jacob Hughes; David F. Brailsford; Steven R. Bagley; Clive E. Adams
The Cochrane Schizophrenia Group's Register of studies details all aspects of the effects of treating people with schizophrenia. It has been gathered over the last 20 years and consists of around 20,000 documents, overwhelmingly in PDF. Document collections of this sort -- on a given theme but gathered from a wide range of sources -- will generally have huge variability in the quality of the PDF, particularly with respect to the key property of text searchability.
   Summarising the results from the best of these papers, to allow evidence-based health care decision making, has so far been done by manually creating a summary document, starting from a visual inspection of the relevant PDF file. This labour-intensive process has resulted, to date, in only 4,000 of the papers being summarised -- with enormous duplication of effort and with many issues around the validity and reliability of the data extraction.
   This paper describes a pilot project to provide a computer-assisted framework in which any of the PDF documents could be searched for the occurrence of some 8,000 keywords and key phrases. Once keyword tagging has been completed the framework assists in the generation of a standard summary document, thereby greatly speeding up the production of these summaries. Early examples of the framework are described and its capabilities illustrated.
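The keyword-tagging step can be pictured as below, assuming the text has already been extracted from a searchable PDF (the extraction itself, and the variable PDF quality it has to cope with, are not shown). The phrase list and the returned count structure are illustrative assumptions, not the project's framework.

    import re
    from collections import Counter

    def tag_document(text, key_phrases):
        """Count case-insensitive whole-word occurrences of each key phrase;
        `key_phrases` stands in for the ~8,000 terms mentioned above."""
        counts = Counter()
        lowered = text.lower()
        for phrase in key_phrases:
            pattern = r"\b" + re.escape(phrase.lower()) + r"\b"
            hits = len(re.findall(pattern, lowered))
            if hits:
                counts[phrase] = hits
        return counts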

Document analysis II

Transforming graph-based sentence representations to alleviate overfitting in relation extraction, pp. 53-62
  Rinaldo J. Lima; Jamilson Batista; Rafael Ferreira; Fred Freitas; Rafael Dueire Lins; Steven Simske; Marcelo Riss
Relation extraction (RE) aims at finding the way entities, such as person, location, organization, date, etc., depend upon each other in a text document. Ontology Population, Automatic Summarization, and Question Answering are fields in which relation extraction offers valuable solutions. A relation extraction method based on inductive logic programming that induces extraction rules suitable to identify semantic relations between entities was proposed by the authors in a previous work. This paper proposes a method to simplify graph-based representations of sentences that replaces the dependency graphs of sentences by simpler ones, keeping the target entities in them. The goal is to speed up the learning phase in a RE framework, by applying several rules for graph simplification that constrain the hypothesis space for generating extraction rules. Moreover, the direct impact on the extraction performance results is also investigated. The proposed techniques outperformed some other state-of-the-art systems when assessed on two standard datasets for relation extraction in the biomedical domain.
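One common graph simplification in relation extraction, shown here only as an illustration (it is not necessarily one of the paper's rules), is to keep just the shortest dependency path between the two target entities. The sketch assumes the sentence's dependency graph is available as a networkx graph whose nodes are tokens.

    import networkx as nx

    def simplify_to_path(dep_graph, entity1, entity2):
        """Keep only the subgraph induced by the shortest dependency path
        between the two target entities; everything off the path is dropped."""
        path = nx.shortest_path(dep_graph, source=entity1, target=entity2)
        return dep_graph.subgraph(path).copy()

    # toy dependency graph for "BRCA1 mutations increase the risk of breast cancer"
    g = nx.Graph()
    g.add_edges_from([("increase", "mutations"), ("mutations", "BRCA1"),
                      ("increase", "risk"), ("risk", "of"), ("of", "cancer"),
                      ("cancer", "breast")])
    reduced = simplify_to_path(g, "BRCA1", "cancer")   # keeps the 5-edge path; "breast" is pruned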
Ruling analysis and classification of torn documents, pp. 63-72
  Markus Diem; Florian Kleber; Robert Sablatnig
A ruling classification is presented in this paper. In contrast to state-of-the-art methods which focus on ruling line removal, ruling lines are analyzed for document clustering in the context of document snippet reassembling. First, a background patch is extracted from a snippet at a position which minimizes the inscribed content. A novel Fourier feature is then computed on the image patch. The classification into void, lined and checked is carried out using Support Vector Machines. Finally, an accurate line localization is performed by means of projection profiles and robust line fitting. The ruling classification achieves an F-score of 0.987 evaluated on a dataset comprising real world document snippets. In addition the line removal was evaluated on a synthetically generated dataset where an F-score of 0.931 is achieved. This dataset is made publicly available so as to allow for benchmarking.
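A minimal sketch of the classification stage, under the assumption that background patches are available as grayscale numpy arrays: a radially binned Fourier magnitude spectrum serves as the feature (periodic ruling shows up as peaks at its spatial frequency) and an SVM performs the three-way classification. The binning and kernel choices are assumptions, not the paper's exact feature.

    import numpy as np
    from sklearn.svm import SVC

    def fourier_feature(patch, n_bins=32):
        """Radially binned magnitude spectrum of a grayscale background patch."""
        f = np.abs(np.fft.fftshift(np.fft.fft2(patch)))
        h, w = f.shape
        cy, cx = h // 2, w // 2
        y, x = np.ogrid[:h, :w]
        r = np.hypot(y - cy, x - cx).astype(int)
        bins = np.bincount(r.ravel(), weights=f.ravel(), minlength=n_bins)[:n_bins]
        return bins / (bins.sum() + 1e-9)

    def train_ruling_classifier(patches, labels):
        # labels are "void", "lined" or "checked"
        X = np.array([fourier_feature(p) for p in patches])
        return SVC(kernel="rbf").fit(X, labels)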
On automatic text segmentation, pp. 73-80
  Boris Dadachev; Alexander Balinsky; Helen Balinsky
Automatic text segmentation, which is the task of breaking a text into topically-consistent segments, is a fundamental problem in Natural Language Processing, Document Classification and Information Retrieval. Text segmentation can significantly improve the performance of various text mining algorithms, by splitting heterogeneous documents into homogeneous fragments and thus facilitating subsequent processing. Applications range from screening of radio communication transcripts to document summarization, from automatic document classification to information visualization, from automatic filtering to security policy enforcement -- all rely on, or can largely benefit from, automatic document segmentation. In this article, a novel approach for automatic text and data stream segmentation is presented and studied. The proposed automatic segmentation algorithm takes advantage of feature extraction and unusual behaviour detection algorithms developed in [4, 5]. It is entirely unsupervised and flexible enough to allow segmentation at different scales, such as short paragraphs and large sections. We also briefly review the most popular and important algorithms for automatic text segmentation and present detailed comparisons of our approach with several of those state-of-the-art algorithms.
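As a point of reference, a classic lexical-cohesion baseline (in the spirit of TextTiling, not the feature-based method of this paper) cuts the text wherever the similarity between adjacent blocks drops below a threshold; the sketch below works on a list of paragraphs and uses an arbitrary threshold.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def segment(paragraphs, depth=0.15):
        """Cut the paragraph sequence wherever lexical cohesion between
        neighbouring paragraphs falls below `depth`."""
        tfidf = TfidfVectorizer().fit_transform(paragraphs)
        sims = [cosine_similarity(tfidf[i], tfidf[i + 1])[0, 0]
                for i in range(len(paragraphs) - 1)]
        boundaries = [i + 1 for i, s in enumerate(sims) if s < depth]
        segments, start = [], 0
        for b in boundaries + [len(paragraphs)]:
            segments.append(paragraphs[start:b])
            start = b
        return segments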
P-GTM: privacy-preserving Google tri-gram method for semantic text similarity, pp. 81-84
  Owen Davison; Abidalrahman Mohammad; Evangelos E. Milios
This paper presents P-GTM, a privacy-preserving text similarity algorithm that extends the Google Tri-gram Method (GTM). The Google Tri-gram Method is a high-performance unsupervised semantic text similarity method based on the use of context from the Google Web 1T n-gram dataset. P-GTM computes the semantic similarity between two input bag-of-words documents on public cloud hardware, without disclosing the documents' contents. Like the GTM, P-GTM requires the uni-gram and tri-gram lists from the Google Web 1T n-gram dataset as additional inputs. The need for these additional lists makes private computation of GTM text similarities a challenging problem. P-GTM uses a combination of pre-computation, encryption, and randomized preprocessing to enable private computation of text similarities using the GTM. We discuss the security of the algorithm and quantify its privacy using standard and real life corpora.

Keynote address

Web-intrinsic interactive documents, pp. 85-86
  Anthony Wiley
Modern interactive documents are complex applications that give the user the experience of editing a document as it will look in its final visual form. Sections of the document can be either editable or read-only, and can dynamically conform artifacts like images to specific users. The components underlying interactive documents are dynamically bound variables and a complex rule engine for adapting the document as the user edits.
   Web interactive documents deliver this dynamic editing experience through the web by using a web browser to deploy the editor. Document editors built into the web browser as a native application provide a higher-quality editing experience because the editor's look and feel is consistent with the web browser's innate controls and navigation.
   The majority of traditional interactive documents have been developed using proprietary formats which are not compatible with today's web browser implementations because they were originally intended as desktop applications. As a consequence, traditional interactive documents are not inherently web applications.
   This talk will provide an overview of the technical challenges faced in developing a web-intrinsic interactive document solution that simultaneously addresses the need for simple, yet rich, user editing features combined with the scalability, and ease of deployment, demanded by enterprises today.
   By way of example, I will introduce, and demonstrate, a new interactive document representation and deployment model. A prerequisite for such representations is that they enable documents to account for traditional document roles and still behave as intrinsic web content for document interaction. Another is that they are also able to support conventional enterprise workflows and complex processes, e.g. approvals, audit, versioning, storage and archival.

Collections, systems and management

Fine-grained change detection in structured text documents, pp. 87-96
  Hannes Dohrn; Dirk Riehle
Detecting and understanding changes between document revisions is an important task. The acquired knowledge can be used to classify the nature of a new document revision or to support a human editor in the review process. While purely textual change detection algorithms offer fine-grained results, they do not understand the syntactic meaning of a change. By representing structured text documents as XML documents we can apply tree-to-tree correction algorithms to identify the syntactic nature of a change.
   Many algorithms for change detection in XML documents have been proposed, but most of them focus on the intricacies of generic XML data and emphasize speed over the quality of the result. Structured text requires a change detection algorithm to pay close attention to the content of text nodes; recent algorithms, however, treat text nodes as black boxes.
   We present an algorithm that combines the advantages of the purely textual approach with the advantages of tree-to-tree change detection by redistributing text from non-overlapping common substrings to the nodes of the trees. This allows us to not only spot changes in the structure but also in the text itself, thus achieving higher quality and a fine-grained result in linear time on average. The algorithm is evaluated by applying it to the corpus of structured text documents that can be found in the English Wikipedia.
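A crude flavour of the text-aware part of the problem, under the assumption that both revisions parse as XML and that their text nodes still correspond one-to-one in document order: pair up the text nodes and report word-level edits inside each pair. The actual algorithm redistributes non-overlapping common substrings across tree nodes and handles structural changes, which this sketch does not.

    import difflib
    import xml.etree.ElementTree as ET

    def text_changes(old_xml, new_xml):
        """Report (operation, old words, new words) for each edited text span."""
        old_texts = [t for t in ET.fromstring(old_xml).itertext() if t.strip()]
        new_texts = [t for t in ET.fromstring(new_xml).itertext() if t.strip()]
        changes = []
        for old, new in zip(old_texts, new_texts):
            a, b = old.split(), new.split()
            sm = difflib.SequenceMatcher(a=a, b=b)
            for op, i1, i2, j1, j2 in sm.get_opcodes():
                if op != "equal":
                    changes.append((op, " ".join(a[i1:i2]), " ".join(b[j1:j2])))
        return changes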
Classifying and ranking search engine results as potential sources of plagiarism, pp. 97-106
  Kyle Williams; Hung-Hsuan Chen; C. Lee Giles
Source retrieval for plagiarism detection involves using a search engine to retrieve candidate sources of plagiarism for a given suspicious document so that more accurate comparisons can be made. An important consideration is that only documents that are likely to be sources of plagiarism should be retrieved so as to minimize the number of unnecessary comparisons made. A supervised strategy for source retrieval is described whereby search results are classified and ranked as potential sources of plagiarism without retrieving the search result documents and using only the information available at search time. The performance of the supervised method is compared to a baseline method and shown to improve precision by up to 3.28%, recall by up to 2.6% and the F1 score by up to 3.37%. Furthermore, features are analyzed to determine which of them are most important for search result classification with features based on document and search result similarity appearing to be the most important.
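The "search-time information only" constraint can be made concrete with the sketch below: features are computed from each result's title and snippet (no download of the result document) and fed to an off-the-shelf classifier. The result dictionary keys and the two features shown are assumptions, not the paper's feature set.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics.pairwise import cosine_similarity

    def snippet_features(suspicious_doc, results):
        """One feature row per search result, using only title + snippet text."""
        texts = [suspicious_doc] + [r["title"] + " " + r["snippet"] for r in results]
        tfidf = TfidfVectorizer().fit_transform(texts)
        sims = cosine_similarity(tfidf[0], tfidf[1:])[0]
        return [[sim, len(r["snippet"].split())] for sim, r in zip(sims, results)]

    # X = snippet_features(doc, results); y = 0/1 source labels from training data
    # classifier = LogisticRegression().fit(X, y); ranking = classifier.predict_proba(X)[:, 1]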
An ensemble approach for text document clustering using Wikipedia concepts, pp. 107-116
  Seyednaser Nourashrafeddin; Evangelos Milios; Dirk V. Arnold
Most text clustering algorithms represent a corpus as a document-term matrix in the bag of words model. The feature values are computed based on term frequencies in documents and no semantic relatedness between terms is considered. Therefore, two semantically similar documents may sit in different clusters if they do not share any terms. One solution to this problem is to enrich the document representation using an external resource like Wikipedia. We propose a new way to integrate Wikipedia concepts in partitional text document clustering in this work. A text corpus is first represented as a document-term matrix and a document-concept matrix. Terms that exist in the corpus are then clustered based on the document-term representation. Given the term clusters, we propose two methods, one based on the document-term representation and the other one based on the document-concept representation, to find two sets of seed documents. The two sets are then used in our text clustering algorithm in an ensemble approach to cluster documents. The experimental results show that even though the document-concept representations do not result in good document clusters per se, integrating them in our ensemble approach improves the quality of document clusters significantly.
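The first stage, clustering the corpus terms by the documents they occur in, can be sketched as follows with recent scikit-learn; seed-document selection, the Wikipedia concept matrix and the ensemble step are not reproduced, and the cluster count is an arbitrary assumption.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    def cluster_terms(documents, n_term_clusters=20):
        """Cluster terms using the columns of the document-term matrix."""
        vectorizer = TfidfVectorizer(stop_words="english")
        dt = vectorizer.fit_transform(documents)                       # documents x terms
        km = KMeans(n_clusters=n_term_clusters, n_init=10).fit(dt.T)   # terms x documents
        clusters = {}
        for term, label in zip(vectorizer.get_feature_names_out(), km.labels_):
            clusters.setdefault(int(label), []).append(term)
        return clusters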
Image-based document management: aggregating collections of handwritten forms, pp. 117-120
  John W. Barrus; Edward L. Schwartz
Many companies still operate critical business processes using paper-based forms, including customer surveys, inspections, contracts and invoices. Converting those handwritten forms to symbolic data is expensive and complicated. This paper presents an overview of the Image-Based Document Management (IBDM) system for analyzing handwritten forms without requiring conversion to symbolic data. Strokes captured in a questionnaire on a tablet are separated into fields that are then displayed in a spreadsheet. Rows represent documents while columns represent corresponding fields across all documents. IBDM allows a process owner to capture and analyze large collections of documents with minimal IT support. IBDM supports the creation of filters and queries on the data. IBDM also allows the user to request symbolic conversion of individual columns of data and permits the user to create custom views by reordering and sorting the columns. In other words, IBDM provides a "writing on paper" experience for the data collector and a web-based database experience for the analyst.

Applications I

ARCTIC: metadata extraction from scientific papers in PDF using two-layer CRF, pp. 121-130
  Alan Souza; Viviane Moreira; Carlos Heuser
Most scientific articles are available in PDF format. The PDF standard allows the generation of metadata that is included within the document. However, many authors do not define this information, making this feature unreliable or incomplete. This fact has been motivating research which aims to extract metadata automatically. Automatic metadata extraction has been identified as one of the most challenging tasks in document engineering. This work proposes Artic, a method for metadata extraction from scientific papers which employs a two-layer probabilistic framework based on Conditional Random Fields. The first layer aims at identifying the main sections with metadata information, and the second layer finds, for each section, the corresponding metadata. Given a PDF file containing a scientific paper, Artic extracts the title, author names, emails, affiliations, and venue information. We report on experiments using 100 real papers from a variety of publishers. Our results outperformed the state-of-the-art system used as the baseline, achieving a precision of over 99%.
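A single-layer sketch of the CRF idea (the paper uses two stacked layers and a richer feature set) using the sklearn-crfsuite package: each line of a paper's header region is described by a small feature dictionary and tagged with a metadata label. The features and labels below are illustrative assumptions, not Artic's actual design.

    import sklearn_crfsuite   # pip install sklearn-crfsuite

    def line_features(lines, i):
        """Illustrative per-line features; not Artic's actual feature set."""
        line = lines[i]
        words = line.split()
        return {
            "bias": 1.0,
            "position": i / max(1, len(lines)),
            "n_tokens": float(len(words)),
            "has_digit": float(any(ch.isdigit() for ch in line)),
            "has_at_sign": float("@" in line),   # hints at an e-mail line
            "mostly_capitalized": float(sum(w[:1].isupper() for w in words) > len(words) / 2),
        }

    def train_layer(documents, labels):
        # documents: list of line lists; labels: per-line tags such as
        # "title", "author", "email", "affiliation", "other"
        X = [[line_features(doc, i) for i in range(len(doc))] for doc in documents]
        crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
        return crf.fit(X, labels)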
Connecting content and annotations with LiveStroke, pp. 131-134
  Michael J. Gormish; John Barrus
One common use for interactive whiteboards (IWBs) is to mark up content provided from a connected laptop. Typically a marking layer is provided which is independent of the laptop content. This leads to problems when the laptop content changes while the strokes in the mark up layer do not. The LiveStroke prototype described in this document uses computer vision techniques to associate the marks with the image of the underlying content from the laptop. For instance, if marks are made on the first page of a document, those marks disappear when the laptop user scrolls to a different page. The marks reappear in the right location on the page when the user returns to the first page. While we have integrated these techniques with interactive whiteboards the techniques are also applicable to screen sharing with mobile touch devices and projectors.
Building digital project rooms for web meetings, pp. 135-138
  Laurent Denoue; Scott Carter; Andreas Girgensohn; Matthew Cooper
Distributed teams must co-ordinate a variety of tasks. To do so they need to be able to create, share, and annotate documents as well as discuss plans and goals. Many workflow tools support document sharing, while other tools support videoconferencing. However, there exists little support for connecting the two. In this work, we describe a system that allows users to share and markup content during web meetings. This shared content can provide important conversational props within the context of a meeting; it can also help users review archived meetings. Users can also extract content from meetings directly into their personal notes or other workflow tools.
The virtual splitter: refactoring web applications for the multiscreen environment, pp. 139-142
  Mira Sarkis; Cyril Concolato; Jean-Claude Dufourd
Creating web applications for the multiscreen environment is still a challenge. One approach is to transform existing single-screen applications but this has not been done yet automatically or generically. This paper proposes a refactoring system. It consists of a generic and extensible mapping phase that automatically analyzes the application content based on a semantic or a visual criterion determined by the author or the user, and prepares it for the splitting process. The system then splits the application and as a result delivers two instrumented applications ready for distribution across devices. During runtime, the system uses a mirroring phase to maintain the functionality of the distributed application and to support a dynamic splitting process. Developed as a Chrome extension, our approach is validated on several web applications, including a YouTube page and a video application from Mozilla.
SimSeerX: a similar document search engine, pp. 143-146
  Kyle Williams; Jian Wu; C. Lee Giles
The need to find similar documents occurs in many settings, such as plagiarism detection or research paper recommendation. Manually constructing queries to find similar documents may be overly complex, thus motivating the use of whole documents as queries. This paper introduces SimSeerX, a search engine for similar document retrieval that receives whole documents as queries and returns a ranked list of similar documents. Key to the design of SimSeerX is that it is able to work with multiple similarity functions and document collections. We present the architecture and interface of SimSeerX, show its applicability with three different similarity functions and demonstrate its scalability on a collection of 3.5 million academic documents.
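A toy analogue of a whole-document-as-query engine with a pluggable similarity function follows; nothing here is the SimSeerX implementation, and the index structure and similarity choice are assumptions.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    class SimilarDocSearch:
        """Index a collection once, then rank it against whole-document queries."""
        def __init__(self, collection, similarity=cosine_similarity):
            self.vectorizer = TfidfVectorizer(stop_words="english")
            self.matrix = self.vectorizer.fit_transform(collection)
            self.similarity = similarity          # pluggable similarity function

        def query(self, document, k=10):
            vec = self.vectorizer.transform([document])
            scores = self.similarity(vec, self.matrix)[0]
            ranked = sorted(enumerate(scores), key=lambda p: p[1], reverse=True)
            return ranked[:k]                     # (document index, score) pairs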

Generation, manipulation and presentation

Pagination: it's what you say, not how long it takes to say it, pp. 147-156
  Joshua Hailpern; Niranjan Damera Venkata; Marina Danilevsky
Pagination -- the process of determining where to break an article across pages in a multi-article layout -- is a common layout challenge for most commercially printed newspapers and magazines. To date, no one has created an algorithm that determines a minimal pagination break point based on the content of the article. Existing approaches for automatic multi-article layout focus exclusively on maximizing content (number of articles) and optimizing aesthetic presentation (e.g., spacing between articles). However, disregarding the semantic information within the article can lead to overly aggressive cutting, thereby eliminating key content and potentially confusing the reader, or setting too generous of a break point, thereby leaving in superfluous content and making automatic layout more difficult. This is one of the remaining challenges on the path from manual layouts to fully automated processes that still ensure article content quality. In this work, we present a new approach to calculating a document's minimal break point for the task of pagination. Our approach uses a statistical language model to predict minimal break points based on the semantic content of an article. We then compare 4 novel candidate approaches and 4 baselines (currently in use by layout algorithms). Results from this experiment show that one of our approaches strongly outperforms the baselines and alternatives. Results from a second study suggest that humans are not able to agree on a single "best" break point. Therefore, this work shows that a semantic-based lower bound break point prediction is necessary for ideal automated document synthesis within a real-world context.
Extracting web content for personalized presentation, pp. 157-164
  Rodrigo Chamun; Daniele Pinheiro; Diego Jornada; João Batista S. de Oliveira; Isabel Manssour
Printing web pages is usually a thankless task as the result is often a document with many badly-used pages and poor layout. Besides the actual content, superfluous web elements like menus and links are often present and in a printed version they are commonly perceived as an annoyance. Therefore, a solution for obtaining cleaner versions for printing is to detect parts of the page that the reader wants to consume, eliminating unnecessary elements and filtering the "true" content of the web page. In addition, the same solution may be used online to present cleaner versions of web pages, discarding any elements that the user wishes to avoid.
   In this paper we present a novel approach to implement such filtering. The method is interactive at first: The user samples items that are to be preserved on the page and thereafter everything that is not similar to the samples is removed from the page. This is achieved by comparing the path of all elements on the DOM representation of the page with the path of the elements sampled by the user and preserving only elements that have a path "similar" to the sample. The introduction of a similarity measure adds an important degree of adaptability to the needs of different users and applications.
   This approach is quite general and may be applied to any XML tree that has labeled nodes. We use HTML as a case study and present a Google Chrome extension that implements the approach as well as a user study comparing our results with commercial results.
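The path-similarity filter described above can be sketched as follows, assuming lxml-style elements (with .tag, .get() and .getparent()); the string-ratio similarity and the 0.8 threshold are assumptions standing in for the paper's measure.

    from difflib import SequenceMatcher

    def dom_path(element):
        """Root-to-element path of tag names, with CSS classes when present."""
        parts = []
        while element is not None:
            cls = element.get("class") or ""
            parts.append(element.tag + ("." + cls if cls else ""))
            element = element.getparent()
        return "/".join(reversed(parts))

    def keep_similar(all_elements, sampled_elements, threshold=0.8):
        """Keep only elements whose DOM path is similar enough to the path of
        at least one element the user sampled as content to preserve."""
        sample_paths = [dom_path(e) for e in sampled_elements]
        return [el for el in all_elements
                if any(SequenceMatcher(None, dom_path(el), sp).ratio() >= threshold
                       for sp in sample_paths)]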
Truncation: all the news that fits we'll print, pp. 165-174
  Joshua Hailpern; Niranjan Damera Venkata; Marina Danilevsky
A news article generally contains a high-level overview of the facts early on, followed by paragraphs of more detailed information. This structure allows copy editors to truncate the latter paragraphs of an article in order to satisfy space limitations without losing critical information. Existing approaches to this problem of automatic multi-article layout focus exclusively on maximizing content and aesthetics. However, no algorithm can determine how "good" a truncation point is based on the semantic content, or article readability. Yet, disregarding the semantic information within the article can lead to either overly aggressive cutting, thereby eliminating key content and potentially confusing the reader; conversely, it may set too generous of a truncation point, thus leaving in superfluous content and making automatic layout more difficult. This is one of the remaining challenges on the path from manual layouts to fully automated processes with high quality output. In this work, we present a new semantic-focused approach to rate the quality of a truncation point. We built models based on results from an extensive user study on over 700 news articles. Further results show that existing techniques over-cut content. We demonstrate the layout impact through a second evaluation that implements our models in the first layout approach that integrates both layout and semantic quality. The primary contribution of this work is the demonstration that semantic-based modeling is critical for high-quality automated document synthesis within a real-world context.
JAR tool: using document analysis for improving the throughput of high performance printing environments, pp. 175-178
  Mariana Kolberg; Luiz Gustavo Fernandes; Mateus Raeder; Carolina Fonseca
Digital printers have consistently improved their speed in the past years. Meanwhile, the need for document personalization and customization has increased. As a consequence of these two facts, the traditional rasterization process has become a highly demanding computational step in the printing workflow. Moreover, Print Service Providers are now using multiple RIP engines to speed up the whole document rasterization process, and depending on the characteristics of the input documents the rasterization process may not achieve the print-engine speed, creating an unwanted bottleneck. In this scenario, we developed a tool called Job Adaptive Router (JAR) aimed at improving the throughput of the rasterization process through clever load balancing among RIP engines, based on information obtained by analyzing the content of the input documents. Furthermore, along with this tool we propose strategies that consider relevant characteristics of documents, such as transparency and reusability of images, to split the job in a more intelligent way. The results obtained confirm that the use of the proposed tool improves the performance of the rasterization process.
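The load-balancing idea can be illustrated with a greedy split that always hands the next job to the least-loaded RIP engine; the per-job cost estimate (e.g. weighting page count, transparency use and image reuse) is an assumption, not the JAR tool's actual heuristic.

    import heapq

    def assign_jobs(jobs, n_engines):
        """jobs: list of (job_id, estimated_cost); returns {engine: [job_id, ...]}."""
        heap = [(0.0, engine, []) for engine in range(n_engines)]   # (load, engine, assigned)
        heapq.heapify(heap)
        for job_id, cost in sorted(jobs, key=lambda j: j[1], reverse=True):
            load, engine, assigned = heapq.heappop(heap)            # least-loaded engine
            assigned.append(job_id)
            heapq.heappush(heap, (load + cost, engine, assigned))
        return {engine: assigned for _, engine, assigned in heap}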

Applications II

Humanist-centric tools for big data: Berkeley prosopography services, pp. 179-188
  Patrick Schmitz; Laurie Pearce
In this paper, we describe Berkeley Prosopography Services (BPS), a new set of tools for prosopography -- the identification of individuals and study of their interactions -- in support of humanities research. Prosopography is an example of "big data" in the humanities, characterized not by the size of the datasets, but by the way that computational and data-driven methods can transform scholarly workflows. BPS is based upon re-usable infrastructure, supporting generalized web services for corpus management, social network analysis, and visualization. The BPS disambiguation model is a formal implementation of the traditional heuristics used by humanists, and supports plug-in rules for adaptation to a wide range of domain corpora. A workspace model supports exploratory research and collaboration. We contrast the BPS model of configurable heuristic rules to other approaches for automated text analysis, and explain how our model facilitates interpretation by humanist researchers. We describe the significance of the BPS assertion model in which researchers assert conclusions or possibilities, allowing them to override automated inference, to explore ideas in what-if scenarios, and to formally publish and subscribe to asserted annotations among colleagues, and/or with students. We present an initial evaluation of researchers' experience using the tools to study corpora of cuneiform tablets, and describe plans to expand the application of the tools to a broader range of corpora.
The impact of prior knowledge on searching in software documentation, pp. 189-198
  Klaas Andries de Graaf; Peng Liang; Antony Tang; Hans van Vliet
Software documents are used to capture and communicate knowledge in software projects. It is important that this knowledge can be retrieved efficiently and effectively, to prevent wasted time and errors that negatively affect the quality of software. In this paper we investigate how software professionals search for knowledge in documentation. We studied the search behaviour of professionals in industry. Prior knowledge helps professionals to search software documents efficiently and effectively. However, it can also misguide professionals to an incomplete search.
What academics want when reading digitally, pp. 199-202
  Juliane Franze; Kim Marriott; Michael Wybrow
Researchers constantly read and annotate academic documents. While almost all documents are provided digitally, many are still printed and read on paper. We surveyed 162 academics in order to better understand their reading habits and preferences. We were particularly interested in understanding the barriers to digital reading and the features desired by academics for digital reading applications.
A platform for language independent summarization, pp. 203-206
  Luciano de Souza Cabral; Rafael Dueire Lins; Rafael Fe Mello; Fred Freitas; Bruno Ávila; Steven Simske; Marcelo Riss
The text data available on the Internet is not only huge in volume, but also in diversity of subject, quality and idiom. Such factors make it infeasible to efficiently scavenge useful information from it. Automatic text summarization is a possible solution for efficiently addressing such a problem, because it aims to sieve the relevant information in documents by creating shorter versions of the text. However, most of the techniques and tools available for automatic text summarization are designed only for the English language, which is a severe restriction. There are multilingual platforms that support, at most, 2 languages. This paper proposes a language independent summarization platform that provides corpus acquisition, language classification, translation and text summarization for 25 different languages.

Workshops & tutorial

Document changes: modeling, detection, storage and visualization (DChanges 2014), pp. 207-208
  Gioele Barabucci; Uwe M. Borghoff; Angelo Di Iorio; Sonja Maier; Ethan Munson
With collaborative tools becoming more and more widespread, users have grown accustomed to features like automatic versioning of their documents or the visualization of changes made by other users. The research community, however, finds the current state of these tools seriously lacking. The second edition of the DChanges workshop focuses on these shortcomings, introducing new ways to produce version-aware documents and merge changes from multiple sources. Other aspects -- in particular, the standardization of formats for tracking changes -- are discussed, too.
   The gathering is also an occasion to follow up on the projects that were discussed or presented during DChanges 2013, and to foster new collaborations among researchers.
Semantic analysis of documents workshop (SemADoc): extended abstract, pp. 209-210
  Evangelos Milios; Carlotta Domeniconi
DH-CASE II: collaborative annotations in shared environments: metadata, tools and techniques in the digital humanities, pp. 211-212
  Patrick Schmitz; Laurie Pearce; Quinn Dombrowski
The DH-CASE II Workshop, held in conjunction with ACM Document Engineering 2014, focused on the tools and environments that support annotation, broadly defined, including modeling, authoring, analysis, publication and sharing. Participants explored shared challenges and differing approaches, seeking to identify emerging best practices, as well as those approaches that may have potential for wider application or influence.
DOCENG 2014: PDF tutorial, pp. 213-214
  Steven R. Bagley; Matthew R. B. Hardy
Many billions of documents are stored in the Portable Document Format (PDF). These documents contain a wealth of information and yet PDF is often seen as an inaccessible format and, for that reason, often gets a very bad press. In this tutorial, we get under the hood of PDF and analyze the poor practices that cause PDF files to be inaccessible. We discuss how to access the text and graphics within a PDF and we identify those features of PDF that can be used to make the information much more accessible. We also discuss some of the new ISO standards that provide profiles for producing Accessible PDF files.