
Proceedings of the 22nd ACM Conference on Hypertext and Hypermedia

Fullname: Hypertext'11: Proceedings of the 22nd ACM Conference on Hypertext and Hypermedia
Editors: Paul De Bra; Kaj Grønbæk
Location: Eindhoven, Netherlands
Dates: 2011-Jun-06 to 2011-Jun-09
Publisher: ACM
Standard No: ISBN 1-4503-0256-4, 978-1-4503-0256-2; ACM DL: Table of Contents; hcibib: HYPER11
Papers: 38
Pages: 336
Links: Conference Home Page
  1. Keynote-invited talks
  2. Track 1: dynamic and computed hypermedia
  3. Track 2: emerging structures and ubiquitous hypermedia
  4. Track 3: social media (linking people and things)
  5. Track 4: interaction, narrative, and storytelling

Keynote-invited talks

From disasters to WOW: using web science to understand and enable 21st century multidimensional networks, pp. 1-2
  Noshir Contractor
Recent advances in Web Science provide comprehensive digital traces of social actions, interactions, and transactions. These data provide an unprecedented exploratorium to model the socio-technical motivations for creating, maintaining, dissolving, and reconstituting multidimensional social networks. Multidimensional networks include multiple types of nodes (people, documents, datasets, tags, etc.) and multiple types of relationships (co-authorship, citation, web links, etc.). Using examples from research in a wide range of activities such as disaster response, science and engineering communities, public health and massively multiplayer online games (WoW -- the World of Warcraft), Contractor will argue that Web Science serves as the foundation for the development of social network theories and methods to help advance our ability to understand and enable multidimensional networks.
From hypertext to linked data: the ever evolving web, pp. 3-4
  Wendy Hall
In this talk, we will reflect on the evolution of the Web. We will do this by analyzing the reasons why it became the first truly ubiquitous hypertext system against all competitors, and then by looking both at the way it has evolved from a network of linked documents to a system that facilitates social networking on a scale previously unimaginable, and at how it will evolve in the future as a network of linked data and beyond. The study of the Web -- its evolution and its impact on society, on business, and on government -- is referred to as Web science. We consider some of the major challenges of Web science and discuss possible Web worlds of the future.
Emerging trends in search user interfaces, pp. 5-6
  Marti A. Hearst
What does the future hold for search user interfaces? Following on a recently completed book on this topic, this talk identifies some important trends in the use of information technology and suggests how these may affect search in the future. These include a notable trend towards more "natural" user interfaces, a trend towards social rather than solo usage of information technology, and a trend in technology advancing the integration of massive quantities of user behavior and large-scale knowledge bases. These trends are, or will be, interweaving in various ways, which will have some interesting ramifications for search interfaces, and should suggest promising directions for research.

Track 1: dynamic and computed hypermedia

Implicit association via crowd-sourced coselection, pp. 7-16
  Helen Ashman; Michael Antunovic; Satit Chaprasit; Gavin Smith; Mark Truran
The interaction of vast numbers of search engine users with sets of search results is a potential source of significant quantities of resource classification data. In this paper we discuss work which uses coselection data (i.e. multiple click-through events generated by the same user on a single search engine result page) as an indicator of mutual relevance between web resources and a means for the automatic clustering of sense-singular resources. The results indicate that coselection can be used in this way. We ground-truthed unambiguous query clustering, forming a foundation for work on automatic ambiguity detection based on the number of generated clusters. Using the cluster-overlap-by-population principle, the extension of previous work allowed the determination of synonyms or lingual translations: overlapping clusters indicated mutual relevance in coselection and, subsequently, the irrelevance of the actual label inherited from the user query.
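The coselection idea lends itself to a compact sketch: result pairs clicked by the same user on the same result page accumulate mutual-relevance votes, and well-supported components form sense-singular clusters. A minimal Python illustration, assuming click logs are available as (user, query, clicked-URLs) tuples; the pairing scheme and vote threshold are assumptions, not the paper's exact algorithm:

    from collections import defaultdict
    from itertools import combinations

    def coselection_clusters(click_logs, min_votes=2):
        """click_logs: iterable of (user, query, clicked_urls) tuples, one per
        result-page view. Returns clusters of mutually relevant URLs."""
        votes = defaultdict(int)
        for user, query, clicked in click_logs:
            # each pair of results coselected on one page is a mutual-relevance vote
            for a, b in combinations(sorted(set(clicked)), 2):
                votes[(a, b)] += 1
        graph = defaultdict(set)
        for (a, b), v in votes.items():
            if v >= min_votes:
                graph[a].add(b)
                graph[b].add(a)
        clusters, seen = [], set()
        for node in graph:                      # connected components = clusters
            if node in seen:
                continue
            comp, stack = set(), [node]
            while stack:
                n = stack.pop()
                if n not in comp:
                    comp.add(n)
                    stack.extend(graph[n] - comp)
            seen |= comp
            clusters.append(comp)
        return clusters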
Bridging link and query intent to enhance web search, pp. 17-26
  Na Dai; Xiaoguang Qi; Brian D. Davison
Understanding query intent is essential to generating appropriate rankings for users. Existing methods have provided customized rankings to answer queries with different intent. While previous methods have shown improvement over their non-discriminating counterparts, the web authors' intent when creating a hyperlink is seldom taken into consideration. To mitigate this gap, we categorize hyperlinks into two types that are reasonably comparable to query intent, i.e., links describing the target page's identity and links describing the target page's content. We argue that emphasizing one type of link when ranking documents can benefit retrieval for that type of query. We start by presenting a link intent classification approach based on link context representations that capture evidence from anchors, target pages, and their associated links, and then introduce our enhanced retrieval model that incorporates link intent into the estimation of anchor text importance. Comparative experiments on two large-scale web corpora demonstrate the efficacy of our approaches.
Beyond the usual suspects: context-aware revisitation support, pp. 27-36
  Ricardo Kawase; George Papadakis; Eelco Herder; Wolfgang Nejdl
A considerable amount of our activity on the Web involves revisits to pages or sites. Reasons for revisiting include active monitoring of content, verification of information, regular use of online services, and recurring tasks. Browser support for revisitation is mainly focused on frequently and recently visited pages. In this paper we present a dynamic browser toolbar that provides recommendations beyond these usual suspects, balancing diversity and relevance. The recommendation method used is a combination of ranking and propagation methods. Experimental outcomes show that this algorithm performs significantly better than the baseline method. Further experiments address the question whether it is more appropriate to recommend specific pages or rather (portal pages of) Web sites. We conducted two user studies with a dynamic toolbar that relies on our recommendation algorithm. In this context, the outcomes confirm that users appreciate and use the contextual recommendations provided by the toolbar.
Automatic mining of cognitive metadata using fuzzy inference, pp. 37-46
  Melike Sah; Vincent Wade
Personalized search and browsing is increasingly vital, especially for enterprises to be able to reach their customers. A key challenge in supporting personalization is the need for rich metadata, such as cognitive metadata about documents. Given the size of large knowledge bases, manual annotation is neither scalable nor feasible. On the other hand, automatic mining of cognitive metadata is challenging, since it is very difficult to automatically understand the underlying intellectual knowledge about documents. To alleviate this problem, we introduce a novel metadata extraction framework, based on fuzzy information granulation and a fuzzy inference system, for automatic cognitive metadata mining. A user evaluation study shows that our approach provides reasonable precision rates for difficulty, interactivity type, and interactivity level on the 100 documents examined. In addition, the proposed fuzzy inference system achieves improved results compared to a rule-based reasoner for document difficulty metadata extraction (an 11% improvement).
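The fuzzy-inference step can be pictured with a toy example: granulate document features into fuzzy sets, fire a couple of rules, and defuzzify into a difficulty score. The features, membership boundaries and rules below are invented for illustration; the paper's system is richer:

    def tri(x, a, b, c):
        """Triangular membership function rising from a, peaking at b, falling to c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def document_difficulty(avg_sentence_len, term_rarity):
        # fuzzy granulation of two document features (boundaries are illustrative)
        long_sentences = tri(avg_sentence_len, 15, 30, 45)
        short_sentences = tri(avg_sentence_len, 0, 8, 18)
        rare_terms = tri(term_rarity, 0.3, 0.7, 1.01)
        common_terms = tri(term_rarity, -0.01, 0.1, 0.4)
        # rules: long sentences AND rare terms -> difficult; short AND common -> easy
        difficult = min(long_sentences, rare_terms)
        easy = min(short_sentences, common_terms)
        # defuzzification as a weighted average (easy = 0.0, difficult = 1.0)
        total = difficult + easy
        return 0.5 if total == 0 else difficult / total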
Personalised rating prediction for new users using latent factor models, pp. 47-56
  Yanir Seroussi; Fabian Bohnert; Ingrid Zukerman
In recent years, personalised recommendations have gained importance in helping users deal with the abundance of information available online. Personalised recommendations are often based on rating predictions, and thus accurate rating prediction is essential for the generation of useful recommendations. Recently, rating prediction algorithms based on matrix factorisation have become increasingly popular, due to their high accuracy and scalability. However, these algorithms still deliver inaccurate rating predictions for new users, who have submitted only a few ratings.
   In this paper, we address the new user problem by introducing several extensions to the basic matrix factorisation algorithm, which take user attributes into account when generating rating predictions. We consider both demographic attributes, explicitly supplied by users, and attributes inferred from user-generated texts. Our results show that employing our text-based user attributes yields personalised rating predictions that are more accurate than our baselines, while not requiring users to explicitly supply any information about themselves and their preferences.
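The gist of attribute-augmented matrix factorisation can be sketched as follows: a new user's latent vector is backed off toward an aggregate of latent vectors for their attribute values (demographics, inferred text features), with the blend shifting toward the user's own vector as ratings accumulate. The combination rule below is an assumption for illustration, not the paper's exact model:

    import numpy as np

    def predict_rating(user, item, mu, b_user, b_item, P, Q, A, user_attrs, n_ratings):
        """mu: global mean rating; b_user, b_item: bias dicts; P, Q: latent user and
        item vectors; A: latent vectors per attribute value (e.g. 'age:25-34');
        user_attrs[user]: that user's attribute values; n_ratings[user]: rating count."""
        # attribute-based representation: average of the user's attribute vectors
        attr_vec = np.mean([A[a] for a in user_attrs[user]], axis=0)
        # the fewer ratings a user has, the more weight the attributes receive
        w = n_ratings[user] / (n_ratings[user] + 10.0)  # 10.0: illustrative damping
        p_u = w * P[user] + (1.0 - w) * attr_vec
        return mu + b_user[user] + b_item[item] + p_u.dot(Q[item])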
Little search game: term network acquisition via a human computation game, pp. 57-62
  Jakub Simko; Michal Tvarozek; Maria Bielikova
Semantic structures, ranging from ontologies to flat folksonomies, are widely used on the Web, despite the fact that their creation in sufficient quality is often a costly task. We propose a new approach for acquiring a lightweight network of related terms via the Little Search Game -- a competitive browser game of search query formulation. The format of game queries forces players to express their perception of term relatedness. The term network is aggregated using "votes" from multiple players playing the same problem instance. We show that nearly 91% of the relationships produced by the Little Search Game are correct, and also elaborate on the game's unique ability to discover term relations that are otherwise hidden to typical corpora mining methods.
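The vote aggregation can be sketched compactly, assuming each submitted game query yields the task term plus the terms the player "subtracted" from the query (the vote threshold is illustrative, not the paper's tuned value):

    from collections import Counter, defaultdict

    def aggregate_term_network(game_logs, min_votes=3):
        """game_logs: iterable of (task_term, negative_terms) pairs, one per
        submitted game query. A term subtracted by several independent players
        becomes an edge of the term network."""
        votes = defaultdict(Counter)
        for task_term, negatives in game_logs:
            votes[task_term].update(negatives)
        return {(term, related): count
                for term, counter in votes.items()
                for related, count in counter.items() if count >= min_votes}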
GALE: a highly extensible adaptive hypermedia engine, pp. 63-72
  David Smits; Paul De Bra
This paper presents GALE, the GRAPPLE Adaptive Learning Environment, which (contrary to what the word suggests) is a truly generic and general-purpose adaptive hypermedia engine. Five years have passed since "The Design of AHA!" was published at ACM Hypertext (2006). GALE takes the notion of general-purpose a whole lot further. We solve shortcomings of existing adaptive systems in terms of genericity, extensibility and usability, and show how GALE improves on the state of the art in all these aspects. We illustrate different authoring styles for GALE, including the use of template pages, and show how adaptation can be defined in a completely decentralized way by using GALE's open corpus adaptation facility. GALE has been used in a number of adaptive hypermedia workshops and assignments to test whether authors can actually make use of the extensive functionality that GALE offers. Adaptation has been added to wiki sites, to existing material, e.g. from w3schools, and of course also to locally authored hypertext. Soon GALE will be used in cross-course adaptation at the TU/e in a pilot project to improve the success rate of university students.
Personalisation in the wild: providing personalisation across semantic, social and open-web resources, pp. 73-82
  Ben Steichen; Alexander O'Connor; Vincent Wade
One of the key motivating factors for information providers to use personalisation is to maximise the benefit to the user in accessing their content. However, such systems have traditionally focussed on corporate or professionally authored content, and have not been able to leverage the benefits of other material already on the web written about that subject by other authors. Such information includes open-web information as well as user-generated content such as forums, blogs, tags, etc. By providing personalised compositions and presentations across these heterogeneous information sources, a potentially richer user experience can be created, leveraging the mutual benefits of professionally authored content, open-web information and active user communities. This paper presents novel techniques and architectures that extend the personalisation reserved for corporate or professionally developed content to user-generated content and pages in the wild. Complementary affordances of Personalised Information Retrieval and Adaptive Hypermedia are leveraged in order to provide Adaptive Retrieval and Composition of Heterogeneous INformation sources for personalized hypertext Generation (ARCHING). The approach enables adaptive selection and navigation according to multiple adaptation dimensions and across a variety of heterogeneous data sources. The architectures have been applied in a real-life personalised customer care scenario, and a user study evaluation involving authentic information needs has been conducted. The evidence clearly shows that the system successfully blends a user's search experience with adaptive selection and navigation techniques, and that the user experience is improved in terms of both task assistance and user satisfaction.
Evaluating significance of historical entities based on tempo-spatial impacts analysis using Wikipedia link structure, pp. 83-92
  Yuku Takahashi; Hiroaki Ohshima; Mitsuo Yamamoto; Hirotoshi Iwasaki; Satoshi Oyama; Katsumi Tanaka
We propose a method to evaluate the significance of historical entities (people, events, and so on), where the significance of a historical entity means how much it affected other historical entities. Our proposed method first calculates the tempo-spatial impact of historical entities. The impact of a historical entity varies according to time and location. Historical entities are collected from Wikipedia. We assume that a Wikipedia link between historical entities represents an impact propagation; that is, when an entity has a link to another entity, we regard the former as influenced by the latter. Historical entities in Wikipedia usually have the date and location of their occurrence. Our proposed iterative algorithm propagates such initial tempo-spatial information through links in a similar manner to PageRank, so the tempo-spatial impact scores of all the historical entities can be calculated. We assume that a historical entity is significant if it influences many other entities that are far from it temporally or geographically. We demonstrate a prototype system and show the results of experiments that prove the effectiveness of our method.
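The propagation step can be sketched as a PageRank-style iteration over bucketed (era, region) impact distributions; the damping factor, the direction of flow and the normalisation below are schematic assumptions rather than the paper's exact algorithm:

    def propagate_impact(links, initial, d=0.85, iters=20):
        """links[a] lists the entities a links to; per the 'influenced by' reading,
        a's tempo-spatial mass flows to the entities that influenced it.
        initial[e] maps (era, region) buckets to weights from e's own date/place."""
        impact = {e: dict(dist) for e, dist in initial.items()}
        for _ in range(iters):
            nxt = {e: {k: (1 - d) * v for k, v in initial[e].items()} for e in initial}
            for src, targets in links.items():
                if not targets:
                    continue
                share = d / len(targets)
                for tgt in targets:
                    for bucket, v in impact[src].items():
                        nxt[tgt][bucket] = nxt[tgt].get(bucket, 0.0) + share * v
            impact = nxt
        return impact  # significance can then favor mass far from an entity's own buckets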
Tags vs shelves: from social tagging to social classification, pp. 93-102
  Arkaitz Zubiaga; Christian Körner; Markus Strohmaier
Recent research has shown that different tagging motivations and user behavior can affect the overall usefulness of social tagging systems for certain tasks. In this paper, we provide further evidence for this observation by demonstrating that tagging data obtained from certain types of users -- so-called Categorizers -- outperforms data from other users on a social classification task. We show that segmenting users based on their tagging behavior has a significant impact on the performance of automated classification of tagged data, using (i) tagging data from two different social tagging systems, (ii) a Support Vector Machine as the classification mechanism and (iii) existing classification systems such as the Library of Congress Classification System as ground truth. Our results are relevant for scientists studying the pragmatics and semantics of social tagging systems, as well as for engineers interested in influencing the emerging properties of deployed social tagging systems.
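The classification setup is conventional enough to sketch with scikit-learn: represent each resource by the tags assigned to it (restricted, in the paper's key condition, to tags from Categorizer-type users) and learn Library of Congress classes. The toy data below is purely illustrative:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # each resource represented by its tags; labels are LCC classes (QA = science,
    # ND = painting, NB = sculpture) -- a toy stand-in for the paper's datasets
    docs_tags = ["python programming reference", "impressionism painting monet",
                 "algorithms complexity textbook", "sculpture renaissance art"]
    lcc_labels = ["QA", "ND", "QA", "NB"]

    classifier = make_pipeline(TfidfVectorizer(), LinearSVC())
    classifier.fit(docs_tags, lcc_labels)
    print(classifier.predict(["haskell programming tutorial"]))  # -> ['QA']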

Track 2: emerging structures and ubiquitous hypermedia

Can we talk about spatial hypertext?, pp. 103-112
  Mark Bernstein
Spatial hypertexts are difficult to explain and to share because we have so little vocabulary with which to discuss them. From examination of actual spatial hypertexts drawn from a variety of domains and created in a variety of systems, we may identify and name several common patterns.
Many views, many modes, many tools & one structure, pp. 113-122
  William Jones; Kenneth M. Anderson
People yearn for more integration of their information. But tools meant to help often do the opposite -- pulling people and their information in different directions. Fragmentation is potentially worsened as personal information moves onto the Web and into a myriad of special-purpose, mobile-enabled applications. How can tool developers innovate "non-disruptively", in ways that do not force people to re-organize or re-locate their information? This paper makes two arguments: 1. An integration of personal information is not likely to happen through some new release of a desktop operating system or via a Web-based "super tool." 2. Instead, integration is best supported through the development of a standards-based infrastructure that makes provision for the shared manipulation of common structure by any number of tools, each in its own way. To illustrate this approach, the paper describes an XML-based schema, considerations in its design, and its current use in three separate tools. In its design and use, the schema builds on the lessons learned by the open hypermedia and structural computing communities, while moving forward with new techniques that address the evolution of the term "application" beyond desktop apps to mobile apps, cloud-based apps and various hybrid architectures.
Hypertext structures for investigative teams, pp. 123-132
  Rasmus Rosenqvist Petersen; Uffe Kock Wiil
Investigations such as police investigations, intelligence analysis, and investigative journalism involve a number of complex knowledge management tasks. Investigative teams collect, process, and analyze information related to a specific target to create products that can be disseminated to their customers. This paper presents a novel hypertext-based tool that supports a human-centered, target-centric model for investigative teams. The model divides investigative tasks into five overall processes: acquisition, synthesis, sense-making, dissemination, and cooperation. The developed tool provides more comprehensive support for synthesis and sense-making tasks than existing tools.
An experience using a spatial hypertext Wiki, pp. 133-142
  Carlos Solis; Nour Ali
Most wikis do not allow users to collaboratively organize relations among wiki pages, nor provide ways to visualize them, because such relations are hard to express using hyperlinks. The Spatial Hypertext Wiki (ShyWiki) is a wiki that uses Spatial Hypertext to represent visual and spatial implicit relations. This paper reports on an experience using ShyWiki's features and its spatial hypertext model. Four groups of 3 members each were asked to use ShyWiki for creating, sharing and brainstorming knowledge during the design and documentation of a software architecture. We present the evaluation of a questionnaire in which users rated the perceived usefulness and ease of use of the spatial and visual properties of ShyWiki and several of its features. We also asked the users whether they would find the visual and spatial properties useful in a wiki such as Wikipedia. In addition, we have analyzed the visual and spatial structures used in the wiki pages, and which features were used.
A generic approach for on-the-fly adding of context-aware features to existing websites, pp. 143-152
  William Van Woensel; Sven Casteleyn; Olga De Troyer
More and more, mobile devices act as personal information managers and are able to obtain rich contextual information on the user's environment. Mobile, context-aware web applications can exploit this information to better address the needs of mobile users. Currently, such websites are either developed separately from their associated desktop-oriented version, or both versions are created simultaneously by employing methodologies that support multi-platform context-aware websites, requiring an extensive engineering effort. While these approaches provide a solution for developing new websites, they pass over the plethora of existing websites. To address this issue, we present an approach for enhancing existing websites on-the-fly with context-aware features. We first discuss the requirements for such an adaptation process, and identify applicable adaptation methods for realizing context-aware features. Next, we explain our generic approach, which is grounded in the use of semantic information extracted from existing websites. Finally, we present a concrete application of our approach that is based on the SCOUT framework for mobile and context-aware application development.

Track 3: social media (linking people and things)

Semantic similarity in heterogeneous ontologies, pp. 153-160
  Elisa Chiabrando; Silvia Likavec; Ilaria Lombardi; Claudia Picardi; Daniele Theseider Dupré
The recent extensive usage of ontologies as knowledge bases that enable rigorous representation and reasoning over heterogeneous data poses certain challenges for their construction and maintenance. Many of these ontologies are incomplete, containing many dense sub-ontologies. A need arises for a measure that would help calculate the similarity between the concepts in these kinds of ontologies. In this work, we introduce a new similarity measure for ontological concepts that takes these issues into account. It is based on conceptual specificity, which measures how relevant a certain concept is in a given context, and on conceptual distance, which introduces different edge lengths in the ontology graph. We also address the problem of computing similarity between concepts in the presence of implicit classes in ontologies. The evaluation of our approach shows an improvement over Leacock and Chodorow's distance-based measure. Finally, we provide two application domains that can benefit from this similarity measure.
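For reference, Leacock and Chodorow score two concepts as sim(c1, c2) = -log(len(c1, c2) / (2 * D)), where len counts the edges on the shortest path between the concepts and D is the depth of the taxonomy. A weighted variant in the spirit of this abstract (a schematic reconstruction, not the authors' published formula) replaces the edge count with a sum of conceptual distances:

    sim_w(c1, c2) = -log( len_w(c1, c2) / (2 * D_w) )

where len_w sums per-edge conceptual distances along the shortest path, each distance modulated by the contextual specificity of the concepts it connects, and D_w is the corresponding weighted depth; setting every edge length to 1 recovers the original measure.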
Identifying relevant social media content: leveraging information diversity and user cognition, pp. 161-170
  Munmun De Choudhury; Scott Counts; Mary Czerwinski
As users turn to large scale social media systems like Twitter for topic-based content exploration, they quickly face the issue that there may be hundreds of thousands of items matching any given topic they might query. Given the scale of the potential result sets, how does one identify the 'best' or 'right' set of items? We explore a solution that aligns characteristics of the information space, including specific content attributes and the information diversity of the results set, with measurements of human information processing, including engagement and recognition memory. Using Twitter as a test bed, we propose a greedy iterative clustering technique for selecting a set of items on a given topic that matches a specified level of diversity.
   In a user study, we show that our proposed method yields sets of items that were, on balance, more engaging, better remembered, and rated as more interesting and informative compared to baseline techniques. Additionally, diversity indeed seemed to be important to study participants in their consumption of content. However, as a rather surprising result, we also observe that content was perceived to be more relevant when it was highly homogeneous or highly heterogeneous. In this light, implications for the selection and evaluation of topic-centric item sets in social media contexts are discussed.
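The greedy selection idea, sketched in Python; the diversity objective (average pairwise distance steered toward a target level) and the stopping rule are assumptions for illustration, not the paper's exact procedure:

    def select_diverse(items, k, target_diversity, distance):
        """Greedily grow a k-item set whose average pairwise distance approaches
        target_diversity; distance(a, b) should return a value in [0, 1]."""
        selected = [items[0]]                 # seed with an arbitrary first item
        while len(selected) < k:
            def gap(candidate):
                pool = selected + [candidate]
                n_pairs = len(pool) * (len(pool) - 1) / 2
                avg = sum(distance(a, b) for i, a in enumerate(pool)
                          for b in pool[i + 1:]) / n_pairs
                return abs(target_diversity - avg)
            # pick the item that moves the set's diversity closest to the target
            best = min((it for it in items if it not in selected), key=gap)
            selected.append(best)
        return selected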
All liaisons are dangerous when all your friends are known to us, pp. 171-180
  Daniel Gayo Avello
Online Social Networks (OSNs) are used by millions of users worldwide. Academically speaking, there is little doubt about the usefulness of demographic studies conducted on OSNs; hence, methods to label unknown users from small labeled samples are very useful. However, from the general public's point of view, this can be a serious privacy concern. Thus, both topics are tackled in this paper: first, a new algorithm to perform user profiling in social networks is described, and its performance is reported and discussed. Secondly, the experiments -- conducted on information usually considered sensitive -- reveal that merely publicizing one's contacts puts privacy at risk; thus, measures to minimize privacy leaks due to social graph data mining are outlined.
Modeling the structure and evolution of discussion cascades, pp. 181-190
  Vicenç Gómez; Hilbert J. Kappen; Andreas Kaltenbrunner
We analyze the structure and evolution of discussion cascades in four popular websites: Slashdot, Barrapunto, Meneame and Wikipedia. Despite the large differences between these sites, a preferential attachment (PA) model with bias to the root can capture the temporal evolution of the observed trees and many of their statistical properties, namely the probability distributions of the branching factors (degrees), subtree sizes and certain correlations. The parameters of the model are learned efficiently using a novel maximum likelihood estimation scheme for PA, and provide a figurative interpretation of the communication habits and resulting discussion cascades on the four different websites.
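The generative side of such a model is simple to simulate: each new comment attaches to an existing node with probability proportional to its degree plus a bias toward the root post. The parameterisation below is illustrative; the paper learns the actual parameters by maximum likelihood:

    import random

    def grow_cascade(n_comments, alpha=1.0, root_bias=2.0):
        """Grow a discussion tree; node 0 is the post. Each new comment attaches
        to node i with probability ~ alpha * degree(i) + (root_bias if i == 0)."""
        degree = [1]                  # nominal starting degree for the root
        parent = {0: None}
        for node in range(1, n_comments):
            weights = [alpha * degree[i] + (root_bias if i == 0 else 0.0)
                       for i in range(node)]
            target = random.choices(range(node), weights=weights)[0]
            parent[node] = target
            degree[target] += 1
            degree.append(1)
        return parent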
Reactive tags: associating behaviour to prescriptive tags, pp. 191-200
  Jon Iturrioz; Oscar Díaz; Iker Azpeitia
Social tagging is one of the hallmarks of Web 2.0. The most common role of tags is descriptive. However, tags are also being used for other purposes, such as indicating actions to be conducted on the resource (e.g. 'toread'). This work focuses on 'prescriptive tags', which have some implicit behaviour associated with them in the user's mind. So far, little support is given for the automation of this "implicit behaviour", particularly when the behaviour lies outside the tagging site. This paper introduces the notion of 'reactive tags' as a means for tagging to impact sites other than the tagging site itself. The operational semantics of reactive tags is defined through event-condition-action rules: events are acts of tagging; conditions check for additional data; and, finally, a rule's actions might impact someone else's account on a different website. The specification of this behavioural semantics is hidden behind a graphical interface that permits users with no programming background to easily associate 'reactions' with the act of tagging. A working system, TABASCO, is presented as proof of concept.
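The event-condition-action shape of a reactive tag is easy to sketch; the rule and action below (pushing a 'toread' resource to a read-later queue) are illustrative stand-ins, not TABASCO's actual machinery:

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class TagEvent:
        user: str
        tag: str
        resource: str

    @dataclass
    class ReactiveTagRule:
        """Fires on a tagging event; if the condition holds, the action runs,
        possibly against a different website's API."""
        condition: Callable[[TagEvent], bool]
        action: Callable[[TagEvent], None]

        def on_tag(self, event: TagEvent):
            if self.condition(event):
                self.action(event)

    # illustrative rule: tagging anything 'toread' queues it elsewhere
    rule = ReactiveTagRule(
        condition=lambda e: e.tag == "toread",
        action=lambda e: print(f"queue {e.resource} in {e.user}'s read-later list"),
    )
    rule.on_tag(TagEvent(user="ana", tag="toread", resource="http://example.org/a"))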
Co-authorship 2.0: patterns of collaboration in Wikipedia, pp. 201-210
  David Laniado; Riccardo Tasso
The study of collaboration patterns in wikis can help shed light on the process of content creation by online communities. To turn a wiki's revision history into a collaboration network, we propose an algorithm that identifies as authors of a page the users who provided most of its relevant content, measured in terms of quantity and of acceptance by the community. The scalability of this approach allows us to study the English Wikipedia community as a co-authorship network. We find evidence of the presence of a nucleus of very active contributors, who seem to spread over the whole wiki and to interact preferentially with inexperienced users. The fundamental role played by this elite is witnessed by the growing centrality of sociometric stars in the network. By isolating the community active around a category, it is possible to study its specific dynamics and most influential authors.
Extracting the mesoscopic structure from heterogeneous systems, pp. 211-220
  Xin Liu; Tsuyoshi Murata
Heterogeneous systems in nature are often characterized by the mesoscopic structures known as communities. In this paper, we propose a framework to address the problem of community detection in bipartite networks and tripartite hypernetworks, which are appropriate models for many heterogeneous systems. The most important advantage of our method is that it is competent for detecting both communities of one-to-one correspondence and communities of many-to-many correspondence, while state-of-the-art techniques can only handle the former. We demonstrate this advantage and show other desired properties of our method through extensive experiments on both synthetic and real-world datasets.
Social networks of Wikipedia, pp. 221-230
  Paolo Massa
Wikipedia, the free online encyclopedia anyone can edit, is a live social experiment: millions of individuals volunteer their knowledge and time to collectively create it. It is hence interesting to try to understand how they do it. While most scholarly attention has focused on article pages, a less investigated share of activity happens on user talk pages, the Wikipedia pages where messages can be left for a specific user. These public conversations can be studied from a Social Network Analysis perspective in order to highlight the structure of the "talk" network. In this paper we focus on this preliminary extraction step by proposing different algorithms. We then empirically validate the differences in the networks they generate on the Venetian Wikipedia against the real network of conversations, extracted manually by coding every message left on all user talk pages. The comparisons show that both the algorithms and the manual process contain inaccuracies that are intrinsic to the freedom and unpredictability of Wikipedia syntax and practices. Nevertheless, a precise description of the issues involved allows informed decisions to be made and empirical findings to be based on reproducible evidence. Our goal is to lay the foundation for a solid computational sociology of wikis. For this reason we release the scripts encoding our algorithms as open source, along with datasets extracted from Wikipedia conversations, to let other researchers replicate and improve our initial effort.
Social capital increases efficiency of collaboration among Wikipedia editors, pp. 231-240
  Keiichi Nemoto; Peter Gloor; Robert Laubacher
In this study we measure the impact of pre-existing social capital on the efficiency of collaboration among Wikipedia editors. To construct a social network among Wikipedians, we look to mutual interaction on the user talk pages of Wikipedia editors. As our data set, we analyze the communication networks associated with 3,085 featured articles -- the articles of highest quality in the English Wikipedia -- comparing them to the networks of 80,154 articles of lower quality. As the metric to assess the quality of collaboration, we measure the time from when an article is started until it is promoted to featured article. The study finds that the higher the pre-existing social capital of the editors working on an article, the faster the article reaches higher quality status, such as featured article. The more cohesive and more centralized the collaboration network, and the more network members were already collaborating before starting to work together on an article, the faster the article they work on will be promoted or featured.
Individual behavior and social influence in online social systems, pp. 241-250
  Manos Papagelis; Vanessa Murdock; Roelof van Zwol
The capacity to collect and analyze the actions of individuals in online social systems at minute-by-minute granularity offers new perspectives on collective human behavior research. Macroscopic analysis of massive datasets yields interesting observations of patterns in online social processes. But working at a large scale has its own limitations, since it typically doesn't allow for interpretations at a microscopic level. We examine how different types of individual behavior affect the decisions of friends in a network. We begin with the problem of detecting social influence in a social system. Then we investigate the causality between individual behavior and social influence by observing the diffusion of an innovation among social peers. Are more active users more influential? Are more credible users more influential? Bridging this gap and finding points where the macroscopic and microscopic worlds converge contributes to better interpretations of the mechanisms by which ideas and behaviors spread in networks, and offers design opportunities for online social systems.
A community question-answering refinement system, pp. 251-260
  Maria Soledad Pera; Yiu-Kai Ng
Community Question Answering (CQA) websites, which archive millions of questions and answers created by CQA users, provide a rich resource of information that is missing at web search engines and QA websites, and have become increasingly popular. Web users who search for answers to their questions at CQA websites, however, are often required to either (i) wait for days until other CQA users post answers to their questions, which might even be incorrect, offensive, or spam, or (ii) deal with restricted answer sets created by CQA websites due to the exact-match constraint imposed between archived questions and user-formulated questions. To automate and enhance the process of locating high-quality answers to a user's question Q at a CQA website, we introduce a CQA refinement system, called QAR. Given Q, QAR first retrieves a set of CQA questions QS that are the same as, or similar to, Q in terms of its specified information need. Thereafter, QAR selects as answers to Q the top-ranked answers (among the ones to the questions in QS) based on various similarity scores and the length of the answers. Empirical studies, which were conducted using questions provided by the Text Retrieval Conference (TREC) and Text Analysis Conference (TAC), in addition to more than four million questions (and their corresponding answers) extracted from Yahoo! Answers, show that QAR is effective in locating archived answers, if they exist, that satisfy the information need specified in Q. We have further assessed the performance of QAR by comparing its question-matching and answer-ranking strategies with their Yahoo! Answers counterparts, and verified that QAR outperforms Yahoo! Answers in (i) locating the set of questions QS that have the highest degrees of similarity with Q and (ii) ranking archived answers to QS as answers to Q.
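The final ranking step can be sketched as a blend of question similarity and answer length; the linear combination and the weights below are illustrative assumptions, not QAR's published scoring function:

    def rank_answers(candidates, w_sim=0.8, w_len=0.2):
        """candidates: list of (answer_text, question_similarity) pairs, where
        question_similarity in [0, 1] scores how close the archived question
        is to the user question Q. Returns answers, best first."""
        max_len = max(len(text) for text, _ in candidates)
        ranked = sorted(candidates,
                        key=lambda c: w_sim * c[1] + w_len * len(c[0]) / max_len,
                        reverse=True)
        return [text for text, _ in ranked]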
A3P: adaptive policy prediction for shared images over popular content sharing sites, pp. 261-270
  Anna Cinzia Squicciarini; Smitha Sundareswaran; Dan Lin; Josh Wede
More and more people go online today and share their personal images using popular web services like Picasa. While enjoying the convenience brought by advanced technology, people have also become aware of the privacy issues of data being shared. Recent studies have highlighted that people expect more tools to allow them to regain control over their privacy. In this work, we propose an Adaptive Privacy Policy Prediction (A3P) system to help users compose privacy settings for their images. In particular, we examine the role of image content and metadata as possible indicators of users' privacy preferences. We propose a two-level image classification framework to obtain image categories which may be associated with similar policies. Then, we develop a policy prediction algorithm to automatically generate a policy for each newly uploaded image. Most importantly, the generated policy will follow the trend of the user's privacy concerns as they evolve over time. We have conducted an extensive user study, and the results demonstrate the effectiveness of our system, with a prediction accuracy of around 90%.
A transfer approach to detecting disease reporting events in blog social media, pp. 271-280
  Avaré Stewart; Matthew Smith; Wolfgang Nejdl
Event-Based Epidemic Intelligence (e-EI) has arisen as a body of work that relies upon different forms of pattern recognition to detect disease reporting events in unstructured text on the Web. Current supervised approaches to e-EI suffer from both high initial and high maintenance costs, due to the need to manually label examples to train and update a classifier for detecting disease reporting events in dynamic information sources, such as blogs.
   In this paper, we propose a new method for the supervised detection of disease reporting events. We tackle the burden of manually labelling data and address the problems associated with building a supervised learner to classify frequently evolving, and variable blog content. We automatically classify outbreak reports to train a supervised learner, and the knowledge acquired from the learning process is then transferred to the task of classifying blogs. Our experiments show that with the automatic classification of training data, and the transfer approach, we achieve an overall precision of 92% and an accuracy of 78.20%.
Entity set expansion in opinion documents, pp. 281-290
  Lei Zhang; Bing Liu
Opinion mining has been an active research area in recent years. The task is to extract opinions expressed on entities and their attributes. For example, the sentence "I love the picture quality of Sony cameras" expresses a positive opinion on the picture quality attribute of Sony cameras; Sony is the entity. This paper focuses on mining entities (e.g., Sony). This is an important problem because without knowing the entity, the extracted opinion is of little use. The problem is similar to the classic named entity recognition problem. However, there is a major difference. In a typical opinion mining application, the user wants to find opinions on some competing entities, e.g., competing or relevant products. However, he/she can often provide only a few names, as there are too many of them. The system has to find the rest from a corpus. This implies that the discovered entities must be of the same type/class; this is the set expansion problem. Classic methods for solving the problem are based on distributional similarity. However, we found this method to be inaccurate. We then employed a learning-based method called Bayesian Sets. However, directly applying Bayesian Sets produces poor results. We then propose a more sophisticated way to use Bayesian Sets. This method, however, raises two major problems: entity ranking and feature sparseness. For entity ranking, we propose a re-ranking method to solve the problem. For feature sparseness, we propose two methods to re-weight features and to determine the quality of features. These methods improve the mining results substantially. Additionally, like any learning algorithm, Bayesian Sets requires the user to engineer a set of features. We design some generic features, based on part-of-speech tags of words, for learning, thus avoiding the need to engineer features for each specific domain. Experimental results using 10 real-life datasets from diverse domains demonstrate the effectiveness of the proposed technique.
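The Bayesian Sets score the approach builds on (Ghahramani and Heller's Beta-Bernoulli formulation for binary features) reduces to a single matrix product; the paper's re-ranking and feature re-weighting layers sit on top of a score like this:

    import numpy as np

    def bayesian_sets_scores(X, seed_idx, c=2.0):
        """X: (n_items, n_features) 0/1 matrix; seed_idx: indices of the seed
        entities supplied by the user. Returns log p(x | seed) / p(x) per item;
        prior strength c is a conventional choice."""
        X = np.asarray(X, dtype=float)
        mean = X.mean(axis=0).clip(1e-6, 1 - 1e-6)
        alpha, beta = c * mean, c * (1.0 - mean)          # empirical prior
        n = len(seed_idx)
        s = X[seed_idx].sum(axis=0)
        alpha_t, beta_t = alpha + s, beta + n - s         # posterior counts
        const = np.sum(np.log(alpha + beta) - np.log(alpha + beta + n)
                       + np.log(beta_t) - np.log(beta))
        q = np.log(alpha_t) - np.log(alpha) - np.log(beta_t) + np.log(beta)
        return const + X.dot(q)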

Track 4: interaction, narrative, and storytelling

An algorithm to generate engaging narratives through non-linearity, pp. 291-298
  Vinay Chilukuri; Bipin Indurkhya
The order in which the events of a story are presented plays an important role in story-telling. In this paper, we present an algorithm that generates narratives with different presentation orders for a story, taking its plan representation and the desired amount of non-linearity as input. We use the principles of the event-indexing model, a cognitive model of narrative comprehension, to generate narratives without affecting the ease of comprehension. We hypothesize that a narrative that deviates from its chronological order yet is presented without affecting the ease of comprehension might lead to cognitive engagement. An empirical evaluation of the system was conducted to test this hypothesis, along with the amount of non-linearity that can be introduced in a story without affecting the ease of comprehension.
Succinct summaries of narrative events using social networks, pp. 299-304
  Bart de Goede; Maarten Marx; Arjan Nusselder; Justin van Wees
This paper addresses the following research aim: provide a useful but succinct summary of long narrative events involving the interaction of several speakers. The summary should enable users to navigate to specific parts of the event using hyperlinks.
   Our solution is based on a representation of the main actors of the event and their interactions as a social network. The solution is applicable to events in which these interactions are more or less formally structured and detectable. This includes theatre and radio plays, recordings of a scientific workshop, proceedings of parliament and meetings notes in general.
The victorian web and the victorian course wiki: comparing the educational effectiveness of identical assignments in web 1.0 and web 2.0, pp. 305-312
  George P. Landow
In September 2008, the author delivered a keynote at WikiSym 2008 in Porto, Portugal, entitled "When a Wiki is not a Wiki: Twenty Years of the Victorian Web", in which he argued that the 45,000 documents that then made up www.victorianweb.org function as a moderated wiki and that, therefore, Web 1.0 can function for educational purposes much as Web 2.0 does -- and has done so for many years. Challenged to employ an actual wiki, Landow taught the same course with the same weekly student assignments in successive years (2009, 2010), the first time using the website, the second a closed, password-protected wiki. After briefly describing the composition, history, and authorship of the Victorian Web, key parts of which have existed in multiple hypermedia environments since their creation in 1988 for the Brown University Intermedia project, this paper presents the assignment, explains its goals, and then sets forth the results of this experience, listing advantages and disadvantages of using the wiki for instructors, students, and the related website.
New plots for hypertext?: towards poetics of a hypertext node, pp. 313-318
  Mariusz Pisarski
While the significance of hypertext links for new ways of telling stories has been widely discussed, there have not been many debates about the very elements being connected: hypertext nodes. Apart from a few exceptions, the poetics of the link overshadows the poetics of the node. My goal is to re-focus on the single node, or lexia, by introducing the concept of contextual regulation as the major force that shapes hypertext narrative units. Because many lexias must be capable of occurring in different contexts and at different stages of the unfolding story, several compromises have to be made on the level of language, style, plot and discourse. Each node, depending on its position and importance, has a varying level of connectivity and autonomy, which affects the global coherence of the text.
   After focusing on the relations between the notion of lexia (as a coherent and flexible unit) and the notion of kernel in narrative theory, an explanation of the rules behind contextual regulation is presented, along with a basic typology of nodes. Then an attempt to enhance existing plot pools for hypertext fiction is undertaken. Several suggestions for new plots, offered by the node-centered approach, are introduced.
Vladimir Nabokov's pale fire: the lost 'father of all hypertext demos'?, pp. 319-324
  Simon Rowberry
In the mid-sixties, Ted Nelson worked at Brown University on an early hypertext system. In 1969, IBM wanted to show the system at a conference, and Nelson gained permission to use Vladimir Nabokov's highly unconventional and hypertextual novel, Pale Fire (1962), as a technical demonstration of hypertext's potential. Unfortunately, the idea was dismissed in favor of a more technical-looking presentation, and thus was never demonstrated publicly. This paper re-considers Pale Fire's position in hypertext history, and posits that, had it been used in this early hypertext demonstration, it would have been the 'father of all hypertext demonstrations', complementing Douglas Engelbart's 'Mother of All Demos' of 1968. In order to demonstrate the significance of Pale Fire's hypertextuality and Nelson's ambitions to use it, this paper explores its hypertextual structure and the implications thereof for the novel, and evaluates its success as a hypertext compared to electronic systems.
Automatic generation of video narratives from shared UGC, pp. 325-334
  Vilmos Zsombori; Michael Frantzis; Rodrigo Laiola Guimaraes; Marian Florin Ursu; Pablo Cesar; Ian Kegel; Roland Craigie; Dick C. A. Bulterman
This paper introduces an evaluated approach to the automatic generation of video narratives from user generated content gathered in a shared repository. In the context of social events, end-users record video material with their personal cameras and upload the content to a common repository. Video narrative techniques, implemented using Narrative Structure Language (NSL) and ShapeShifting Media, are employed to automatically generate movies recounting the event. Such movies are personalized according to the preferences expressed by each individual end-user, for each individual viewing. This paper describes our prototype narrative system, MyVideos, deployed as a web application, and reports on its evaluation for one specific use case: assembling stories of a school concert by parents, relatives and friends. The evaluations carried out through focus groups, interviews and field trials, in the Netherlands and UK, provided validating results and further insights into this approach.