| Augmenting the Web Through Open Hypermedia | | BIBA | 3-25 | |
| Niels Olof Bouvin | |||
| Building on an overview of Web augmentation and a discussion of the three basic approaches to extending the hypermedia functionality of the Web, the author presents a general open hypermedia framework (the Arakne framework) for augmenting the Web. The aim is to give users the ability to link, annotate, and otherwise structure Web pages as they see fit. The paper further discusses the possibilities of the concept by describing various experiments performed with an implementation of the framework, the Arakne Environment. | |||
| XLinkProxy: External Databases with XLink (Technical Note) | | BIBA | 27-37 | |
| Paolo Ciancarini; Federico Folli; Davide Rossi; Fabio Vitali | |||
| Information on the Web can be reached along different paths. Search engines, directories, and bookmarks are the usual means of access, but they do not take into account the "natural" way in which content is organized on the Web: linking. Documents with related information often refer to each other, forming Web domains with a common subject.
| Given the current linking model of the Web, the creation of such domains is fully delegated to the authors of each single document, which often results in sub-optimal coverage of the subject. XLinkProxy combines external link databases, a filtering proxy, dynamic HTML, XLink, and XPointer to provide a solution that overcomes the limitations of the current linking model, allowing the definition of multi-destination custom links that can be associated with any HTML or XML document published on the Web. | |||
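A rough sketch of the architecture this abstract describes: a filtering proxy that consults an externally stored linkbase and injects the resulting destinations into pages as they pass through. Everything below (the linkbase structure, the anchor-injection shortcut in place of real XLink/XPointer processing, the port) is an illustrative assumption, not XLinkProxy's actual code.

```python
# Minimal sketch: a filtering proxy that adds links from an external linkbase.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical external linkbase: source URL -> (label, target) pairs,
# i.e. one multi-destination link per source document.
LINKBASE = {
    "http://example.org/page.html": [
        ("Related survey", "http://example.org/survey.html"),
        ("Background",     "http://example.org/intro.html"),
    ],
}

class LinkInjectingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # In proxy mode, self.path carries the absolute URL being requested.
        html = urlopen(self.path).read().decode("utf-8", "replace")
        # Append the externally stored destinations as ordinary anchors,
        # standing in for the XLink/XPointer machinery of the real system.
        extra = "".join(
            f'<p><a href="{target}">{label}</a></p>'
            for label, target in LINKBASE.get(self.path, [])
        )
        body = html.replace("</body>", extra + "</body>").encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), LinkInjectingProxy).serve_forever()
```

Pointing a browser's proxy setting at localhost:8080 would then show the extra links on any page the linkbase knows about.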
| Scalable Web Page Entanglement | | BIBA | 39-58 | |
| Jason Rohrer | |||
| We present a proxy-based system for augmenting the capabilities of the World Wide Web. Our system adds two-way association links and automatically removes these links when they break, whereas the existing Web features only one-way links and lingering broken links. Several Web augmentation systems that add two-way links have been developed in the past. Our key contribution lies in link management, which in our system is dynamic and completely automatic. Links between Web pages are added and removed according to popular Web traversal paths, freeing both page owners and readers from the burden of link creation and maintenance. Links can form between pages that do not link to each other at all, reflecting the fact that readers have associated these pages with each other; we describe such pages as entangled.
| We use variations on common peer-to-peer techniques to build a scalable system out of a dynamic set of proxy peers. Proxy-to-proxy communication takes place entirely over HTTP, ensuring compatibility with existing infrastructure. A working implementation of our system is available at http://tangle.sourceforge.net/ | |||
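The link-management policy this abstract describes, in which links form and dissolve according to reader traversal behaviour, can be illustrated roughly as follows. The thresholds, decay factor, and data structures are assumptions for illustration, not the system's published algorithm.

```python
# Minimal sketch of traversal-driven link management for page entanglement.
from collections import defaultdict
from itertools import combinations

ADD_THRESHOLD = 5.0   # co-traversal score needed before two pages entangle
DECAY = 0.9           # periodic decay so stale associations eventually break

cotraversals = defaultdict(float)   # frozenset({url_a, url_b}) -> score
links = set()                       # currently entangled page pairs

def record_path(path):
    """Update pair scores from one reader's traversal path (list of URLs)."""
    for a, b in combinations(set(path), 2):
        cotraversals[frozenset((a, b))] += 1.0

def refresh_links():
    """Add links for popular pairs; drop links whose scores have decayed."""
    for pair in list(cotraversals):
        cotraversals[pair] *= DECAY
        if cotraversals[pair] >= ADD_THRESHOLD:
            links.add(pair)
        else:
            links.discard(pair)

record_path(["http://a.example/", "http://b.example/", "http://c.example/"])
refresh_links()
```

Note that no page owner acts at any point: links appear and disappear purely as a side effect of how readers move between pages.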
| Using Maps and Landmarks for Navigation Between Closed and Open Corpus Hypermedia in Web-based Education | | BIBA | 59-82 | |
| Peter Brusilovsky; Riccardo Rizzo | |||
| This paper focuses on the problem of building links from closed to open corpus Web pages in the context of Web-based education. As a possible solution it introduces landmark-based navigation using semantic information space maps, an approach that we are currently investigating. The technical part of the paper presents a system, Knowledge Sea, that implements this approach, describes the mechanism behind the system, and reports some results of a classroom evaluation of the system. | |||
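Semantic information space maps of the kind mentioned here are commonly built with self-organizing maps, which place similar documents in nearby map cells. The toy sketch below shows that general technique; it does not claim to match Knowledge Sea's exact mechanism, vector model, or parameters.

```python
# Toy self-organizing map: pages with similar term vectors land in nearby cells.
import numpy as np

rng = np.random.default_rng(0)
GRID, DIM = 8, 50                        # 8x8 map cells, 50-term page vectors
weights = rng.random((GRID, GRID, DIM))  # one prototype vector per cell

def train(pages, epochs=20, lr=0.5, radius=2):
    for _ in range(epochs):
        for v in pages:                  # v: term vector of one page
            d = ((weights - v) ** 2).sum(axis=2)
            bi, bj = np.unravel_index(d.argmin(), d.shape)   # best-matching cell
            # Pull the winner and its neighbourhood toward the page vector.
            for i in range(max(0, bi - radius), min(GRID, bi + radius + 1)):
                for j in range(max(0, bj - radius), min(GRID, bj + radius + 1)):
                    weights[i, j] += lr * (v - weights[i, j])
        lr *= 0.9                        # cool the learning rate each epoch

def place(page_vector):
    """Map cell where a page lands; nearby cells hold semantically close pages."""
    d = ((weights - page_vector) ** 2).sum(axis=2)
    return np.unravel_index(d.argmin(), d.shape)

train([rng.random(DIM) for _ in range(100)])
```

Cells that attract closed-corpus course pages can then serve as landmarks for open-corpus pages that land nearby.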
| User Modelling and Adaptive Hypermedia Frameworks for Education (Technical Note) | | BIBA | 83-97 | |
| Mohamed Ramzy Zakaria; Tim Brailsford | |||
| In this paper, we give an overview of the hybrid model, a generic user model based on measuring and classifying users' knowledge with respect to multiple knowledge domains simultaneously. In addition, we demonstrate how the model is implemented through the WHURLE (Web-based Hierarchical Universal Reactive Learning Environment) framework, an adaptive educational hypermedia framework in which different domains can be involved at the same time, concurrently serving a wide range of users with different educational states, abilities, and backgrounds. | |||
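As a loose illustration of a user model that scores and classifies knowledge per domain, consider the sketch below; the update rule, stereotype cut-offs, and domain names are invented for the example and are not the hybrid model's own definitions.

```python
# Minimal sketch: per-domain knowledge scores mapped onto stereotypes.
STEREOTYPES = [(0.75, "advanced"), (0.40, "intermediate"), (0.0, "novice")]

class HybridUserModel:
    def __init__(self, domains):
        self.knowledge = {d: 0.0 for d in domains}   # score in [0, 1] per domain

    def record_result(self, domain, correct, weight=0.1):
        """Nudge the domain score toward 1 on success, toward 0 on failure."""
        k = self.knowledge[domain]
        target = 1.0 if correct else 0.0
        self.knowledge[domain] = k + weight * (target - k)

    def classify(self, domain):
        """First stereotype whose cut-off the current score reaches."""
        score = self.knowledge[domain]
        return next(label for cutoff, label in STEREOTYPES if score >= cutoff)

um = HybridUserModel(["xml", "css"])
um.record_result("xml", correct=True)
print(um.classify("xml"))   # "novice" until enough evidence accumulates
```

Because each domain is scored independently, the same user can be treated as advanced in one domain and a novice in another, which is the point of tracking multiple domains simultaneously.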
| Modelling Personalisable Hypermedia: The Goldsmiths Model | | BIBA | 99-137 | |
| James Ohene-Djan; Alvaro A. A. Fernandes | |||
| This paper addresses the issue of how hypermedia systems such as the WWW can be endowed with features which allow the personalisation of the interaction process between the hypermedia and the user. The approach taken is unique in formally modelling a rich set of abstract user-initiated personalisation actions which enable individual users to come closer to satisfying their specific, and often dynamic, information retrieval goals.
| The model proposed is descriptive, rather than prescriptive, and is cast at a level of abstraction above that of concrete systems exploring current technologies. Such an approach, it is hoped, will allow user- and system-initiated personalisation actions to be studied with greater conceptual clarity than is possible with technology-driven experimentation. This paper also describes the development of a personalisable hypermedia system called PAS. Developed at Goldsmiths College, University of London, PAS embodies the main concepts underlying the proposed model. | |||
| XConnector and XTemplate: Improving the Expressiveness and Reuse in Web Authoring Languages | | BIBA | 139-169 | |
| Debora Christina Muchaluat-Saade; Luiz Fernando Gomes Soares | |||
| Despite recent efforts made by the W3C, Web-authoring languages still need to be enhanced. Aiming at this goal, this paper presents proposals for improving their expressiveness and reuse. The proposals are based on an XML language called XConnector, which enables the creation of complex referential and multimedia synchronization relations. XConnector can be used to improve the expressiveness of either linking languages, such as XLink, or linking modules of hypermedia authoring languages, such as XHTML or SMIL. The novel contribution of this paper is another XML language called XTemplate, which enables the creation of hypermedia composite templates. A composite template specifies the types of components and relations, and the components and relationships, that a hypermedia composition has or may have, without identifying what all of those components and relationships are. Templates are traditionally used to improve reuse. Composite templates allow the definition of common structures, which can be seen as representing types of compositions with specific semantics given by the set of defined relationships. Therefore, composite templates could be used to provide new time containers in Web languages, besides the well-known par, seq and excl containers provided by SMIL 2.0. The paper also shows how composite templates are used in the HyperProp hypermedia system and proposes an extension to XLink to incorporate facilities provided by XConnector and XTemplate, improving its expressiveness and reuse. | |||
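To make the time-container idea concrete, here is a minimal sketch of composite templates as typed containers with SMIL-style par/seq semantics. The classes and duration rules are illustrative stand-ins, not the XTemplate language itself.

```python
# Minimal sketch: composite templates as typed time containers.
from dataclasses import dataclass

@dataclass
class Media:
    name: str
    duration: float

@dataclass
class Template:
    """A composite: a relation pattern ("kind") applied to its components."""
    kind: str        # "par" or "seq", mirroring SMIL 2.0 time containers
    children: list   # Media items or nested Templates

    def duration(self):
        ds = [c.duration() if isinstance(c, Template) else c.duration
              for c in self.children]
        return max(ds) if self.kind == "par" else sum(ds)

# A hypothetical "slideshow" composite: narration in parallel with slides.
slides = Template("seq", [Media("slide1", 10), Media("slide2", 15)])
show   = Template("par", [Media("narration", 30), slides])
print(show.duration())   # 30: par takes its longest child, seq sums its children
```

New container kinds beyond par and seq would amount to new duration and relation rules defined once in a template and reused across compositions.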
| Searching the Hypermedia Web: Improved Topic Distillation Through Network Analytic Relevance Ranking | | BIBA | 171-197 | |
| Behnak Yaltaghian; Mark Chignell | |||
| The Web is a large hypermedia space that is generally explored using search engines. These search engines are evolving to make more effective use of the hypermedia structure of the Web. This paper contributes to this evolution by proposing new methods of topic distillation in structured search, based on co-citation and network analysis. We describe a set of 21 network analysis measures of relevance in Web search output. These measures are then compared with human judgments in two studies. In the first study, we compare the average judged relevance of the top 20 search results selected by Google with that of the top 20 results selected by each of the 21 network analysis measures. All but one of the network analysis measures (the exception being "inlink") showed significantly (p < .05) better average judged relevance among their top 20 selections than Google did. Stepwise regression analysis was then used to identify a linear model with three network analysis measures as predictors, which accounted for roughly 17% of the variance in relevance judgments. In the second study, the human judges compared ranked output from Google with the ranked output from the best-fitting one- and three-predictor regression models. There was a tendency for people to prefer the ranked output from the three-predictor regression model. Only four of the 21 subjects made the Google output their first choice (out of the three options given to them). The output as ranked by the three-predictor model was also rated as having, within the top 20 ranked results, significantly more highly relevant results and significantly fewer irrelevant results than the corresponding ratings for Google. While these results need to be extended with more detailed analysis of a wide range of queries and topics, they suggest that network analysis of search output adjacency matrices (where adjacency/proximity is based on Web-wide co-citations) may significantly improve topic distillation by search engines. | |||
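A bare-bones illustration of the ranking idea: compute a few network measures over a co-citation adjacency matrix of the search results and combine them linearly. The three measures and the weights below are placeholders, not the predictors or coefficients fitted in the paper's studies.

```python
# Minimal sketch: linear re-ranking from network measures of a co-citation graph.
import numpy as np

# adjacency[i, j] = co-citation strength between result pages i and j
adjacency = np.array([
    [0, 3, 1, 0],
    [3, 0, 2, 1],
    [1, 2, 0, 0],
    [0, 1, 0, 0],
], dtype=float)

degree = adjacency.sum(axis=1)                        # total co-citation weight
isolation = 1.0 / (1.0 + (adjacency == 0).sum(axis=1))  # crude centrality proxy
eig = np.abs(np.linalg.eigh(adjacency)[1][:, -1])     # eigenvector centrality

# Hypothetical fitted weights for a three-predictor linear relevance model.
w = np.array([0.5, 0.3, 0.2])
features = np.column_stack([degree, isolation, eig])
scores = features @ w
ranking = np.argsort(-scores)          # result indices, best first
print(ranking)
```

In the paper's setting the weights would come from a stepwise regression against human relevance judgments rather than being fixed by hand.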
| Web Hypermedia Cost Estimation: Further Assessment and Comparison of Cost Estimation Modelling Techniques | | BIBA | 199-229 | |
| Emilia Mendes; Steve Counsell; Nile Mosley | |||
| Research into Web cost estimation is relatively new, and few studies have compared cost estimation modelling techniques for Web development, with an emphasis placed on techniques such as Case-based Reasoning (CBR), linear regression, and stepwise regression. Although in a large subset of these studies CBR has given the best predictions, those results were based on a simple type of CBR in which no adaptation rules were used to adjust the estimated effort. In addition, when comparing the prediction accuracy of estimation models, analysis has been limited to a maximum of three training/validation sets, which, according to recent studies, may lead to untrustworthy results.
| Since CBR is potentially easier to understand and apply (two important factors in the successful adoption of estimation methods within Web development companies), it should be examined further. This paper therefore has two objectives: i) to further investigate the use of CBR for Web development effort prediction by comparing the effort prediction accuracy of several CBR techniques; ii) to compare the effort prediction accuracy of the best CBR technique against stepwise and multiple linear regression, using twenty combinations of training/validation sets. Various measures of effort prediction accuracy were applied, and one dataset was used in the estimation process. Stepwise and multiple linear regression showed the best prediction accuracy for the dataset employed. | |||
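A minimal sketch of CBR-style effort estimation of the kind compared in this paper: predict a new project's effort from its most similar past cases. The features, the k value, and the MMRE accuracy measure below are standard illustrations, not the exact configurations evaluated in the study.

```python
# Minimal sketch: case-based reasoning for Web development effort prediction.
import numpy as np

# Past cases: size-style features (e.g. page count, media count) and effort.
features = np.array([[20, 5], [35, 12], [50, 8], [80, 20]], dtype=float)
effort   = np.array([100., 180., 210., 400.])

def cbr_estimate(new_case, k=2):
    """Mean effort of the k most similar past cases (scaled Euclidean distance)."""
    scale = features.std(axis=0)                 # normalise feature ranges
    d = np.linalg.norm((features - new_case) / scale, axis=1)
    return effort[np.argsort(d)[:k]].mean()

def mmre(actual, predicted):
    """Mean Magnitude of Relative Error, a common prediction-accuracy measure."""
    return np.mean(np.abs(actual - predicted) / actual)

print(cbr_estimate(np.array([40., 10.])))
# Accuracy over the known cases (a real study would use held-out validation sets):
preds = np.array([cbr_estimate(f) for f in features])
print("MMRE:", mmre(effort, preds))
```

Adaptation rules, whose absence in earlier studies this paper notes, would adjust the retrieved neighbours' efforts (for instance, scaling by relative project size) instead of averaging them unchanged.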
| From Information Retrieval to Hypertext Linking | | BIBA | 231-255 | |
| Peter J. Brown | |||
| There is a plethora of approaches to retrieval. At one extreme is a Web search engine, which gives the user complete freedom to search a collection of perhaps over a billion documents. At the opposite extreme is a Web page where the author has supplied a small number of links to outside documents, chosen in advance by the author. Many practical retrieval needs lie between these two extremes. This paper looks at the multi-dimensional spectrum of retrieval methods and, at the end, presents some hypothetical tools, intended as blueprints for the future, that aim to cover wider parts of the spectrum than current tools do. | |||