
Proceedings of the 2013 Conference on the Theory of Information Retrieval

Fullname: Proceedings of the 2013 Conference on the Theory of Information Retrieval
Editors: Oren Kurland; Donald Metzler; Christina Lioma; Birger Larsen; Peter Ingwersen
Location: Copenhagen, Denmark
Dates: 2013-Sep-29 to 2013-Oct-02
Publisher: ACM
Standard No: ISBN 978-1-4503-2107-5; hcibib: ICTIR13
Papers: 31
  1. Tutorials
  2. Keynote Address
  3. Relevance Feedback (Long Papers)
  4. Evaluation (Long Papers)
  5. Panel
  6. Keynote Address
  7. Recommender Systems (Long Papers)
  8. Temporal & Thread Search (Long Papers)
  9. Context & Diversification (Short Papers)
  10. Keynote Address
  11. Ranking I (Long Papers)
  12. Ranking II (Short Papers)
  13. Posters

Tutorials

Quantum Mechanics and Information Retrieval: From Theory to Application BIBFull-Text 1
  Massimo Melucci; Benjamin Piwowarski
Statistical Significance Testing in Information Retrieval: Theory and Practice BIBAFull-Text 2
  Ben Carterette
The past 20 years have seen a great improvement in the rigor of information retrieval experimentation, due primarily to two factors: high-quality, public, portable test collections such as those produced by TREC (the Text REtrieval Conference [2]), and the increased practice of statistical hypothesis testing to determine whether measured improvements can be ascribed to something other than random chance. Together these create a very useful standard for reviewers, program committees, and journal editors; work in information retrieval (IR) increasingly cannot be published unless it has been evaluated using a well-constructed test collection and shown to produce a statistically significant improvement over a good baseline.
   But, as the saying goes, any tool sharp enough to be useful is also sharp enough to be dangerous. Statistical tests of significance are widely misunderstood. Most researchers treat them as a "black box": evaluation results go in and a p-value comes out. Because significance is such an important factor in determining what research directions to explore and what is published, using p-values obtained without thought can have consequences for everyone doing research in IR. Ioannidis has argued that the main consequence in the biomedical sciences is that most published research findings are false [1]; could that be the case in IR as well?
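   As a concrete illustration of the kind of test the tutorial covers (this sketch is ours, not part of the tutorial materials; the per-topic scores are hypothetical), here is the paired t-test routinely used to compare two systems evaluated on the same topics:

```python
# A minimal sketch: paired two-sided t-test over per-topic scores.
# The average-precision values below are hypothetical.
from scipy import stats

baseline = [0.21, 0.34, 0.18, 0.45, 0.29, 0.52, 0.11, 0.38]  # per-topic AP, system A
improved = [0.25, 0.33, 0.22, 0.49, 0.35, 0.55, 0.14, 0.41]  # per-topic AP, system B

# A paired test is appropriate because the same topics are run on both systems.
t_stat, p_value = stats.ttest_rel(improved, baseline)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")

# Carterette's caution applies here: the p-value is only as trustworthy as the
# test's assumptions (e.g., roughly normal score differences) are tenable.
```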
Axiomatic Analysis and Optimization of Information Retrieval Models BIBAFull-Text 3
  ChengXiang Zhai; Hui Fang
The accuracy of a search engine is mostly determined by the optimality of the retrieval model it uses. Developing optimal retrieval models has always been an important fundamental research problem in information retrieval, because an improved general retrieval model would enable all search engines to be more useful and thus have immediate, broad impact. Extensive research has been done on developing an optimal retrieval model since the 1960s, leading to multiple effective retrieval models, including, e.g., the Pivoted Normalization vector space model, BM25, Dirichlet Prior Query Likelihood, and PL2. However, these state-of-the-art retrieval models were all developed at least a decade ago, suggesting that it has been difficult to improve on them further. One reason we have not been able to easily improve these models is that we do not have a good understanding of their deficiencies and have mostly relied on empirical evaluation to assess the superiority of a retrieval model.
   Recently, an axiomatic way of analyzing and optimizing retrieval models has been developed and has shown great promise both in understanding the deficiencies of retrieval models and in developing more effective ones. The basic idea of this axiomatic framework is to specify a number of formal constraints that an optimal retrieval model is expected to satisfy, and to use them to assess the optimality of a retrieval model. Such an axiomatic way of modeling relevance provides a theoretical way to study how to develop an ultimately optimal retrieval model, enables analytical comparison of different retrieval models without necessarily requiring empirical evaluation, and has led to the development of multiple more effective retrieval models.
   The purpose of this tutorial is to systematically explain this emerging axiomatic approach to developing optimal retrieval models, review and summarize the research progress achieved so far on this topic, and discuss promising future research directions in optimizing general retrieval models. Tutorial attendees can expect to learn, among other things, (1) the basic methodology of axiomatic analysis and optimization of retrieval models, (2) how to formalize retrieval heuristics with mathematical constraints, (3) the major retrieval constraints proposed so far, (4) the new retrieval functions derived by using the axiomatic approaches, (5) specific research directions to further develop more effective retrieval models, and (6) general open challenges in developing an ultimately optimal retrieval model. The tutorial should appeal to those who work on information retrieval models and those who are interested in applying axiomatic analysis to optimize specific retrieval functions in real applications. It should also interest researchers who work on ranking problems in general. Attendees will be assumed to know the basic concepts of information retrieval models.
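   To make the idea of a formal constraint concrete (a sketch under our own assumptions, not the tutorial's code), one classic heuristic says the score should increase with term frequency but with diminishing returns; this can be checked numerically against BM25's term-frequency component:

```python
# A minimal sketch: numerically checking a term-frequency constraint
# against BM25's TF saturation component (parameter values are common defaults).
def bm25_tf(tf, k1=1.2, b=0.75, doclen=100.0, avgdl=100.0):
    """BM25 term-frequency component for one query term."""
    return (tf * (k1 + 1)) / (tf + k1 * (1 - b + b * doclen / avgdl))

scores = [bm25_tf(tf) for tf in range(1, 6)]
gains = [s2 - s1 for s1, s2 in zip(scores, scores[1:])]

assert all(g > 0 for g in gains)                         # score grows with TF
assert all(g2 < g1 for g1, g2 in zip(gains, gains[1:]))  # with diminishing returns
print("constraint satisfied:", [round(s, 3) for s in scores])
```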
IR Models: Foundations and Relationships BIBAFull-Text 4
  Thomas Roelleke
IR models form a core part of IR research. This tutorial consolidates the foundations of IR models and highlights relationships that help to better understand them. The first part of the tutorial reviews the state of the art, and the second part offers insights into the relationships between TF-IDF, the Probability of Relevance Framework (PRF), BM25, language modelling (LM), probabilistic inference networks (PINs), and Divergence-from-Randomness (DFR).

Keynote Address

Is There Space for Theory in Modern Commercial Search Engines? BIBAFull-Text 5
  Ricardo Baeza-Yates
In this invited talk we strip down current Web search engines to enumerate the main problems where theory may still be useful for improving their quality and performance. The problems cover all the main processes involved in a search engine: crawling, indexing, searching, and ranking. We also explore problems related to the user experience where theory may be relevant as well.

Relevance Feedback (Long Papers)

A Theoretical Analysis of Pseudo-Relevance Feedback Models BIBAFull-Text 6
  Stéphane Clinchant; Eric Gaussier
Our goal in this study is to compare several widely used pseudo-relevance feedback (PRF) models and understand what explains their respective behavior. To do so, we first analyze how different PRF models behave through the characteristics of the terms they select and through their performance on two widely used test collections. This analysis reveals that several well-known models surprisingly tend to select very common terms, with low IDF (inverse document frequency). We then introduce several conditions PRF models should satisfy regarding both the terms they select and the way they weigh them, before studying whether standard PRF models satisfy these conditions. This study reveals that most models are deficient with respect to at least one condition, and that this deficiency explains the results of our analysis of the behavior of the models, as well as some of the results reported on the respective performance of PRF models. Based on the PRF conditions, we finally propose possible corrections for the simple mixture model. The PRF models obtained after these corrections outperform their standard version and yield state-of-the-art PRF models, which confirms the validity of our theoretical analysis.
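   A minimal sketch (hypothetical counts, not the authors' code) of the diagnostic implied by this analysis: compute the mean IDF of the expansion terms a PRF model selects, where a low value flags the tendency to pick very common terms:

```python
import math

N = 1_000_000  # assumed collection size
doc_freq = {"the": 980_000, "system": 420_000, "retrieval": 35_000, "anode": 900}

def idf(term):
    return math.log(N / doc_freq[term])

selected = ["the", "system", "retrieval"]   # terms a PRF model picked (made up)
mean_idf = sum(idf(t) for t in selected) / len(selected)
print(f"mean IDF of selected expansion terms: {mean_idf:.2f}")
# A low mean IDF indicates the model is expanding the query with common,
# poorly discriminating terms -- the behaviour observed in the paper.
```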
Query-Performance Prediction Using Minimal Relevance Feedback BIBAFull-Text 7
  Olga Butman; Anna Shtok; Oren Kurland; David Carmel
There has been much work on devising query-performance prediction approaches that estimate search effectiveness without relevance judgments (i.e., zero feedback). Specifically, post-retrieval predictors analyze the result list of top-retrieved documents. Departing from the zero-feedback approach, in this paper we show that relevance feedback for even very few top-ranked documents can be exploited to dramatically improve prediction quality. Specifically, applying state-of-the-art zero-feedback-based predictors to only a very few relevant documents, rather than to the entire result list as originally designed, substantially improves prediction quality. This novel form of prediction is based on quantifying properties of relevant documents that can attest to query performance. We also show that integrating prediction based on relevant documents with zero-feedback-based prediction is highly effective, specifically with respect to utilizing state-of-the-art direct estimates of retrieval effectiveness when minimal feedback is available.

Evaluation (Long Papers)

Axiometrics: An Axiomatic Approach to Information Retrieval Effectiveness Metrics BIBAFull-Text 8
  Luca Busin; Stefano Mizzaro
The evaluation of retrieval effectiveness has played and is playing a central role in Information Retrieval (IR). A specific issue is that there are literally dozens (most likely more than one hundred) of IR effectiveness metrics, and counting.
   In this paper we propose an axiomatic approach to IR effectiveness metrics. We build on the notions of measure, measurement, and similarity; they allow us to provide a general definition of IR effectiveness metric. On this basis, we then propose and justify some axioms that every effectiveness metric should satisfy, and we derive some theorems from the axioms. We also discuss some future developments.
On Using Fewer Topics in Information Retrieval Evaluations BIBAFull-Text 9
  Andrea Berto; Stefano Mizzaro; Stephen Robertson
The possibility of using fewer topics in TREC, and in TREC-like initiatives, has been studied recently, with encouraging results: even when the number of topics is reduced considerably (for example, to a topic subset of cardinality only 10, in place of the usual 50), it is possible, at least potentially, to obtain similar results when evaluating system effectiveness. However, the generality of this approach has been questioned, since the topic subset selected on one system population does not seem adequate to evaluate other systems. In this paper we reconsider that generality issue: we emphasize some limitations of the previous work and show some experimental results that are instead more positive. The obtained results support the hypothesis that, by taking special care, the few topics selected on the basis of a given system population are also adequate to evaluate a different system population.

Panel

IR Research: Challenges and Long-range Opportunities BIBAFull-Text 10
  Peter Ingwersen
The field of Information Retrieval (IR) is under steady development and change. Information collections have become larger and more diversified; elaborate IT platforms penetrate most life situations; computers have become more powerful; networks and devices more widespread; social media and information interaction in many forms are increasingly used in daily life; etc. Simultaneously, users' information experiences and use of information have become increasingly differentiated. IR is well integrated into all kinds of mobile, transactional and traditional information systems dealing with most kinds of media.
   Since 2002 several workshops have been held to discuss the future lines of action in IR research and development -- the so-called SWIRL workshops. For an update of results see SIGIR Forum vol. 46 (1), 2012, p. 2-32. During the last SWIRL the following issues were discussed: Not just ranked lists; Help for users; Capturing context; Information -- not documents; Domains (novel non-textual IT-driven areas of IR research); Evaluation.
   Since ICTIR 2013 fundamentally concerns IR theory, the objectives of the ICTIR 2013 panel are to pinpoint central theoretical short- and long-term pathways, challenges, opportunities and consequences for IR research. The following themes are suggested, though the panel is not limited to them, to be approached from predominantly theoretical perspectives:
   1. How far can we push ranking models, such as language models, quantum, or learning to rank -- and to what end?
   2. Information interaction, user experiences, context and relevance -- where do we stand -- what is amiss?
   3. Personalization issues -- should user models be socially constructed or real-time dependent?
   4. Social media in integration -- useful to IR?
   5. Non-textual media -- which models are scientifically sound?
   6. Small and medium-sized integrated collections -- enterprise IR: where do we stand?
   7. Evaluation and metrics -- theory-driven or ad-hoc?
   Some of the themes can be combined in response to the questions, for instance, evaluation methodologies for integrated information systems, and sub-themes can be evoked, e.g. tasks, authority or importance vs. relevance and relevance feedback for information interaction. Remember that IR theory is not just a mathematical-logical enterprise but includes conceptual models, frameworks and perspectives as well.

Keynote Address

Compositional Vector Semantics Over Different Ground Fields BIBAFull-Text 11
  Dominic Widdows
For many years, vector space models have been used in information retrieval and computational linguistics to represent terms, queries, and documents, using vector addition as a simple operator to model semantic composition. Though surprisingly successful, this modelling process fails to capture many aspects of meaning, including word order, typed relationships, and nested structures.
   In recent years, this has changed dramatically. Several researchers have had considerable success at representing other semantic operations in vector models, including negation, typed relationships, distributed inference, adjective-noun modification, and nested composition. This success is partly due to the ready availability of established algebraic methods including orthogonal projection, tensor algebra and matrix multiplication, circular convolution, and permutation.
   When applied to vectors with complex or binary numbers as coordinates, these operations, their implementations, and experimental results sometimes differ markedly from those obtained with real numbers as coordinates. This brings our attention to a surprising gap in information retrieval and indeed machine learning: in these rapidly developing empirical fields, we tend to tacitly assume that real numbers are the canonical ground field. This is in marked contrast to physics, where complex numbers are ubiquitous, and logic, where binary numbers are the established starting point.
   In this talk, we will review some of the algebraic operators used today for modelling composition of meaning with vectors, and compare their implementations and behaviours when using different number fields for the vector coordinates. The main goal is to encourage theoretical and practical researchers in information retrieval to experiment much more with complex and binary vectors as well as real vectors, in the hope that such investigations may prove as fruitful for information retrieval as they have been for physics and logic.
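   As one concrete example of the operators mentioned above (a sketch under our own assumptions, using real-valued random vectors), circular convolution can bind two vectors and be approximately inverted, and its fast implementation goes through the complex-valued FFT:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(512)   # e.g., a "role" vector
b = rng.standard_normal(512)   # e.g., a "filler" vector

# Circular convolution via the FFT: O(n log n), complex numbers inside.
bound = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

# Approximate unbinding: convolve with the involution of a
# (a[0] followed by the remaining elements reversed).
a_inv = np.concatenate(([a[0]], a[:0:-1]))
recovered = np.fft.ifft(np.fft.fft(a_inv) * np.fft.fft(bound)).real
cos = recovered @ b / (np.linalg.norm(recovered) * np.linalg.norm(b))
print(f"cosine(recovered, b) = {cos:.2f}")   # well above chance (0 for unrelated vectors)
```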

Recommender Systems (Long Papers)

Enhanced Information Retrieval by Exploiting Recommender Techniques in Cluster-Based Link Analysis BIBAFull-Text 12
  Wei Li; Gareth G. F. Jones
Inspired by the use of PageRank algorithms in document ranking, we develop and evaluate a cluster-based PageRank algorithm to re-rank information retrieval (IR) output with the objective of improving ad hoc search effectiveness. Unlike existing work, our methods exploit recommender techniques to extract the correlation between documents and apply detected correlations in a cluster-based PageRank algorithm to compute the importance of each document in a dataset. In this study two popular recommender techniques are examined in four proposed PageRank models to investigate the effectiveness of our approach. Comparison of our methods with strong baselines demonstrates the solid performance of our approach. Experimental results are reported on an extended version of the FIRE 2011 personal information retrieval (PIR) data collection which includes topically related queries with click-through data and relevance assessment data collected from the query creators. The search logs of the query creators are categorized based on their different topical interests. The experimental results show the significant improvement of our approach compared to results using standard IR and cluster-based PageRank methods.
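   For reference (a generic sketch, not the authors' implementation), the PageRank computation underlying such models is a power iteration over a graph whose edge weights would here come from recommender-derived document correlations:

```python
import numpy as np

def pagerank(W, d=0.85, tol=1e-8, max_iter=100):
    """W[i, j] >= 0: weight of the edge j -> i (e.g., document correlation)."""
    n = W.shape[0]
    col_sums = W.sum(axis=0)
    col_sums[col_sums == 0] = 1.0      # guard against dangling nodes
    M = W / col_sums                   # column-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_next = (1 - d) / n + d * (M @ r)
        if np.abs(r_next - r).sum() < tol:
            break
        r = r_next
    return r

# Toy 4-document graph with made-up correlation weights.
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
print(pagerank(W))   # importance scores used for re-ranking
```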
Understanding Similarity Metrics in Neighbour-based Recommender Systems BIBAFull-Text 13
  Alejandro Bellogín; Arjen P. de Vries
Neighbour-based collaborative filtering is a recommendation technique that provides meaningful and, usually, accurate recommendations. The method's success, however, depends critically upon the similarity metric used to find the most similar users (neighbours), the basis of the predictions made. In this paper, we explore twelve features that aim to explain why some user similarity metrics perform better than others. Specifically, we define two sets of features: a first one based on statistics computed over the distance distribution in the neighbourhood, and a second one based on the nearest-neighbour graph. Our experiments with a public dataset show that some of these features correlate with performance at up to 90%.
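   A minimal sketch (made-up similarity values) of the first family of features: summary statistics over the similarity distribution inside a user's neighbourhood:

```python
import numpy as np

# Similarities between a target user and her k nearest neighbours, sorted
# descending, as produced by some user-similarity metric (values invented).
neighbour_sims = np.array([0.91, 0.84, 0.80, 0.62, 0.55, 0.31, 0.12, 0.05])

features = {
    "mean": neighbour_sims.mean(),                        # overall closeness
    "std": neighbour_sims.std(),                          # spread of similarities
    "max_gap": float(np.max(-np.diff(neighbour_sims))),   # largest drop between ranks
}
print(features)
```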

Temporal & Thread Search (Long Papers)

Information Retrieval with Time Series Query BIBAFull-Text 14
  Hyun Duk Kim; Danila Nikitin; ChengXiang Zhai; Malu Castellanos; Meichun Hsu
We study a novel information retrieval problem, where the query is a time series for a given time period, and the retrieval task is to find relevant documents in a text collection of the same time period which contain topics that are correlated with the query time series. This retrieval problem arises in many text mining applications where there is a need to analyze text data in order to discover potentially causal topics. To solve this problem, we propose and study multiple retrieval algorithms that use the general idea of ranking text documents based on how well their terms are correlated with the query time series. Experimental results show that the proposed retrieval algorithms can effectively help users find documents that are relevant to the time series queries, which can help users analyze the variation patterns of the time series.
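   A minimal sketch (toy data, not the paper's algorithms) of the core ranking idea: score each document by the correlation between its term dynamics and the query time series:

```python
import numpy as np

query_series = np.array([3.0, 5.0, 9.0, 4.0, 2.0, 7.0])   # e.g., a weekly signal

# Per-document term-frequency series over the same six periods (made up).
doc_series = {
    "doc1": np.array([2.0, 4.0, 8.0, 5.0, 1.0, 6.0]),
    "doc2": np.array([9.0, 1.0, 2.0, 8.0, 7.0, 3.0]),
}

def pearson(x, y):
    x, y = x - x.mean(), y - y.mean()
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

ranked = sorted(doc_series, key=lambda d: pearson(query_series, doc_series[d]),
                reverse=True)
print(ranked)   # documents whose term dynamics track the query come first
```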
Exploiting Forum Thread Structures to Improve Thread Clustering BIBAFull-Text 15
  Kumaresh Pattabiraman; Parikshit Sondhi; ChengXiang Zhai
Automated clustering of threads within and across web forums will greatly benefit both users and forum administrators in efficiently seeking, managing, and integrating the huge volume of content being generated. While clustering has been studied for other types of data, little work has been done on clustering forum threads; the informal nature and special structure of forum data make it interesting to study how to effectively cluster forum threads. In this paper, we apply three state-of-the-art clustering methods (i.e., hierarchical agglomerative clustering, k-means, and probabilistic latent semantic analysis) to cluster forum threads and study how to leverage the structure of threads to improve clustering accuracy. We propose three different methods for assigning weights to the posts in a forum thread to achieve a more accurate representation of the thread. We evaluate all the methods on data collected from three different Linux forums for both within-forum and across-forum clustering. Our results show that the state-of-the-art methods perform reasonably well for this task, but performance can be further improved by exploiting thread structures. In particular, a parabolic weighting method that assigns higher weights to both the beginning and end posts of a thread is shown to consistently outperform a standard clustering method.
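   A minimal sketch (our own parameterisation, not necessarily the authors' exact formula) of parabolic post weighting, which gives posts near the start and end of a thread higher weight than those in the middle:

```python
def parabolic_weights(n_posts, floor=0.2):
    """U-shaped weights over post positions 0..n_posts-1."""
    if n_posts == 1:
        return [1.0]
    mid = (n_posts - 1) / 2.0
    raw = [((i - mid) / mid) ** 2 for i in range(n_posts)]  # 1 at the ends, 0 in the middle
    return [floor + (1 - floor) * w for w in raw]           # keep a minimum weight

print([round(w, 2) for w in parabolic_weights(7)])
# -> [1.0, 0.56, 0.29, 0.2, 0.29, 0.56, 1.0]
```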

Context & Diversification (Short Papers)

Tie Breaker: A Novel Way of Combining Retrieval Signals BIBAFull-Text 16
  Hao Wu; Hui Fang
Empirical studies of information retrieval suggest that the effectiveness of a retrieval function is closely related to how it combines multiple retrieval signals including term frequency, inverse document frequency and document length. Although it is relatively easy to capture how each signal contributes to the relevance scores, it is more challenging to find the best way of combining these signals since they often interact with each other in a complicated way. As a result, when deriving a retrieval function from traditional retrieval models, the choice of one implementation over the others was often made based on empirical observations rather than sound theoretical derivations.
   In this paper, we propose a novel way of combining retrieval signals to derive robust retrieval functions. Instead of seeking an integrated way of combining these signals into a complex mathematical retrieval function, our main idea is to prioritize the retrieval signals, apply the strongest signal first to rank documents, and then iteratively use the weaker signals to break the ties among documents with the same scores. One unique advantage of our method is that it eliminates the need for complicated implementations of the signals and enables a simple yet elegant way of combining multiple signals for document ranking. Empirical results show that the proposed method can achieve performance comparable to state-of-the-art retrieval functions over traditional TREC ad hoc retrieval collections, and can outperform them over TREC microblog retrieval collections.
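   A minimal sketch (made-up signal values and a made-up priority order) of the tie-breaking idea; Python's lexicographic tuple comparison is exactly a prioritized combination in which weaker signals only matter when stronger ones tie:

```python
docs = [
    # (doc_id, term_frequency, idf, 1/doc_length) -- hypothetical signals
    ("d1", 3, 2.1, 0.010),
    ("d2", 3, 2.1, 0.020),
    ("d3", 5, 1.0, 0.005),
    ("d4", 3, 3.0, 0.001),
]

# Sort by tf first; idf breaks tf ties; length normalization breaks idf ties.
ranked = sorted(docs, key=lambda d: (d[1], d[2], d[3]), reverse=True)
print([d[0] for d in ranked])   # -> ['d3', 'd4', 'd2', 'd1']
```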
A Diagnostic Study of Search Result Diversification Methods BIBAFull-Text 17
  Wei Zheng; Hui Fang
Search result diversification aims to maximize the coverage of different pieces of relevant information in the search results. Many diversification methods have been proposed and studied. However, the advantages and disadvantages of each method remain unclear. In this paper, we conduct a diagnostic study of two state-of-the-art diversification methods with the goal of identifying their weaknesses in order to further improve performance. Specifically, we design a set of perturbation tests that isolate the individual factors, i.e., relevance and diversity, that affect diversification performance. The test results are expected to provide insights on how well each method deals with these factors in the diversification process. Experimental results suggest that some methods perform better on queries whose originally retrieved documents are more relevant to the query, while other methods perform better when the documents are more diversified. We therefore propose methods to combine these existing methods based on the predicted factor of the query. The experimental results show that the combined methods can outperform individual methods on TREC collections.
Opinion-based User Profile Modeling for Contextual Suggestions BIBAFull-Text 18
  Peilin Yang; Hui Fang
The problem of contextual suggestion is defined as finding suggested places for a user based on the user's temporal and geographical context as well as the user's preferences on example places. Existing studies model user preferences based on descriptive information about the suggestions, which might not generalize well. In this paper, we propose to model user profiles based on opinions about the candidate suggestions. Instead of simply building a profile about "what a user likes or dislikes", we want to build the profile based on "why a user likes or dislikes", so that we can make a more accurate prediction on whether a user would like a new candidate suggestion. In particular, we propose to leverage the opinions in comments posted by other users to estimate a user's profile. The basic assumption is that the reason why a user likes or dislikes a place is likely to be covered by the reviews posted by other users who share similar opinions with that user. Experimental results over a TREC collection show that the proposed opinion-based user modeling can indeed outperform the existing description-based methods.

Keynote Address

The Roots of the Theoretical Basis for Information Retrieval BIBAFull-Text 19
  Keith van Rijsbergen
Over many years a number of theoretical constructs and models have evolved for the study of information retrieval in its various manifestations. Gradually these have settled down into a small number of generally accepted techniques. Some of these are based on logic, probability theory, or vector space theory. In my talk I will go back in time and show how these techniques took shape, and how they were based on some standard theories in formal logic, decision theory, and linear algebra. The struggle to establish these techniques is in itself of interest, especially in relation to the need to deploy a sensible theory of measurement to quantify retrieval effectiveness. In discussing this earlier work I will emphasise the research of some of the early pioneers in our field: Fairthorne, Maron, Cooper, Cleverdon, Salton and Sparck Jones.

Ranking I (Long Papers)

Modelling Score Distributions Without Actual Scores BIBAFull-Text 20
  Stephen Robertson; Evangelos Kanoulas; Emine Yilmaz
Score-distribution models are used for various practical purposes in search, for example for results merging and threshold setting. In this paper, the basic ideas of the score-distributional approach to viewing and analysing the effectiveness of search systems are re-examined. All recent score-distribution modelling work depends on the availability of actual scores generated by systems, and makes assumptions about these scores. Such work is therefore not applicable to systems which do not generate or reveal such scores, or whose scoring/ranking approach violates the assumptions. We demonstrate that it is possible to apply at least some score-distributional ideas without access to real scores, knowing only the rankings produced (together with a single effectiveness metric based on relevance judgements). This new basic insight is illustrated by means of simulation experiments, on a range of TREC runs, some of whose reported scores are clearly unsuitable for existing methods.
Revisiting Exhaustivity and Specificity Using Propositional Logic and Lattice Theory BIBAFull-Text 21
  Karam Abdulahhad; Jean-Pierre Chevallet; Catherine Berrut
Exhaustivity and Specificity in the logical Information Retrieval framework were introduced by Nie [16]. However, despite some attempts, they remain theoretical notions without a clear idea of how they should be implemented. In this study, we present a new approach to deal with them. We use propositional logic and lattice theory in order to redefine the two implications and their uncertainties P(d → q) and P(q → d). We also show how to integrate the two notions into a concrete IR model to build a new, effective model. Our proposal is validated against six corpora, using two types of terms (words and concepts). The experimental results support the validity of our viewpoint, which states that the explicit integration of Exhaustivity and Specificity into IR models improves the retrieval performance of those models; moreover, there should be a balance between the two notions.
Efficient Nearest-Neighbor Search in the Probability Simplex BIBAFull-Text 22
  Kriste Krstovski; David A. Smith; Hanna M. Wallach; Andrew McGregor
Document similarity tasks arise in many areas of information retrieval and natural language processing. A fundamental question when comparing documents is which representation to use. Topic models, which have served as versatile tools for exploratory data analysis and visualization, represent documents as probability distributions over latent topics. Systems comparing topic distributions thus use measures of probability divergence such as Kullback-Leibler, Jensen-Shannon, or Hellinger. This paper presents novel analysis and applications of the reduction of Hellinger divergence to Euclidean distance computations. This reduction allows us to exploit fast approximate nearest-neighbor (NN) techniques, such as locality-sensitive hashing (LSH) and approximate search in k-d trees, for search in the probability simplex. We demonstrate the effectiveness and efficiency of this approach on two tasks using latent Dirichlet allocation (LDA) document representations: discovering relationships between National Institutes of Health (NIH) grants and prior-art retrieval for patents. Evaluation on these tasks and on synthetic data shows that both Euclidean LSH and approximate k-d tree search perform well when a single nearest neighbor must be found. When a larger set of similar documents is to be retrieved, the k-d tree approach is more effective and efficient.
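   A minimal sketch of the reduction the paper exploits: Hellinger distance between distributions equals, up to a constant, the Euclidean distance between their element-wise square roots, so standard Euclidean structures such as k-d trees apply directly:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
docs = rng.dirichlet(np.ones(50), size=1000)   # 1000 documents on a 50-topic simplex

tree = cKDTree(np.sqrt(docs))                  # the Hellinger -> Euclidean map
query = rng.dirichlet(np.ones(50))
dist, idx = tree.query(np.sqrt(query), k=5)    # 5 nearest neighbours

hellinger = dist / np.sqrt(2.0)                # H(p, q) = ||sqrt(p) - sqrt(q)|| / sqrt(2)
print(idx, np.round(hellinger, 3))
```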

Ranking II (Short Papers)

A New Probabilistic Ranking Model BIBAFull-Text 23
  Richard Connor; Robert Moss; Morgan Harvey
Over the years a number of models have been introduced as solutions to the central IR problem of ranking documents given textual queries. Here we define a new model. It is a probabilistic model with no term inter-dependencies, thus allowing calculation from inverted indices. It is based upon a simple core hypothesis, directly calculating a ranking score in terms of probability theory. Early results show that its performance is credible, even in the absence of parameters or heuristics. Its semantic basis gives absolute results, allowing different rankings to be compared with each other. The investigation of this model is at a very early stage; here, we simply propose the model for further investigation.
A Standard Document Score for Information Retrieval BIBAFull-Text 24
  Ronan Cummins
In this paper we propose a standard document retrieval score based on term-frequencies. We model the within-document term-frequency aspect of each term as a random variable. The standard score is then used to transform each random variable to a regularised form so that they can be effectively combined for use as a standard document score. The standardisation used imposes no constraints on the choice of probability distribution for the term-frequencies.
   We show that the standardisation automatically creates a measure of term-specificity. Analysis shows that this measure is highly correlated with the traditional idf measure, and furthermore suggests a novel interpretation and justification of idf-like measures. With experiments on a number of different TREC collections, we show that the standard document score model is comparable with BM25. However, we show that an advantage of the standard document score model is that the document scores output from the model are dimensionless quantities, and therefore are comparable across different queries and collections in certain circumstances.
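   A minimal sketch (hypothetical counts, not the paper's model) of the core move: standardising a term's within-document frequencies yields dimensionless scores that can be combined across terms:

```python
import numpy as np

# Within-document frequencies of one term across the documents containing it.
tf = np.array([1, 1, 2, 3, 1, 5, 2, 1], dtype=float)

z = (tf - tf.mean()) / tf.std()   # classical standard (z-) score
print(np.round(z, 2))
# A document's query score combines the z-scores of the query terms it
# contains; bursty occurrences of a term yield large standardised values,
# which is one source of the idf-like behaviour analysed in the paper.
```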
Textual Similarity with a Bag-of-Embedded-Words Model BIBAFull-Text 25
  Stéphane Clinchant; Florent Perronnin
While words in documents are generally treated as discrete entities, they can be embedded in a Euclidean space which reflects an a priori notion of similarity between them. In such a case, a text document can be viewed as a bag-of-embedded-words (BoEW): a set of real-valued vectors. We propose a novel document representation based on such continuous word embeddings. It consists of non-linearly mapping the word embeddings into a higher-dimensional space and aggregating them into a document-level representation. We report retrieval experiments in the case where the word embeddings are computed from standard topic models, showing significant improvements with respect to the original topic models.
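   A minimal sketch (random embeddings and a non-linearity of our own choosing, not the paper's exact mapping) of the bag-of-embedded-words idea: non-linearly map each word vector, then aggregate into one document vector:

```python
import numpy as np

rng = np.random.default_rng(2)
embeddings = {w: rng.standard_normal(20)
              for w in ["theory", "retrieval", "vector", "model"]}

def boew(doc_words, dim=20):
    vecs = [np.tanh(embeddings[w]) for w in doc_words if w in embeddings]
    if not vecs:
        return np.zeros(dim)
    return np.mean(vecs, axis=0)   # aggregate mapped word vectors

doc_vec = boew(["theory", "retrieval", "model"])
print(doc_vec.shape)   # one fixed-length vector per document, ready for cosine scoring
```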
Statistical Translation Language Model for Twitter Search BIBAFull-Text 26
  Maryam Karimzadehgan; ChengXiang Zhai; Miles Efron
With the prevalence of social media applications, an increasing number of internet users are actively publishing text information online. This influx provides a wealth of text information about those users. Ranking in social media poses different challenges than Web search ranking, one of which is that microblog messages are very short. As a result, the vocabulary mismatch problem is exacerbated in social media search. In this paper, we first study the standard translation model for this problem and reveal that the translation language model not only helps to bridge the vocabulary gap but also improves the estimate of term frequency. We further propose two ways to improve the translation language model: leveraging hashtag information and adaptively setting the self-translation parameter. Experimental results on a Twitter dataset show that our proposed methods are effective.
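   A minimal sketch (toy probabilities, not the paper's estimation method) of translation language model scoring: a query word can be generated by any document word through a translation probability, which bridges the vocabulary gap even when a short tweet shares no terms with the query:

```python
import math

p_translate = {   # p(query_word | doc_word), values invented
    ("car", "car"): 0.8, ("car", "automobile"): 0.5, ("car", "vehicle"): 0.3,
}
p_collection = {"car": 0.001}   # background collection model, invented
lam = 0.5                       # smoothing weight (assumed)

def score(query, doc_words):
    s = 0.0
    for w in query:
        p_doc = sum(p_translate.get((w, u), 0.0) for u in doc_words) / len(doc_words)
        s += math.log((1 - lam) * p_doc + lam * p_collection.get(w, 1e-6))
    return s

# Non-zero match despite zero term overlap between query and tweet.
print(score(["car"], ["automobile", "vehicle"]))
```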

Posters

More Than Words: A Review of Planets, Stars and Sample Spaces BIBAFull-Text 27
  Emanuele Di Buccio; Giorgio Maria Di Nunzio
This work discusses the consequences of choosing a sample space based on the interpretation of an experiment. We discuss paradoxes that have been extensively studied in the literature, and then we propose an alternative interpretation of the problem of document classification.
A Visual Analysis of the Effects of Assumptions of Classical Probabilistic Models BIBAFull-Text 28
  Emanuele Di Buccio; Giorgio Maria Di Nunzio
This poster discusses the main assumptions of classical probabilistic models in IR by means of a visual data analysis approach. Starting from the problem of classifying documents into relevant and non-relevant classes, we derive exactly the same relevance-weight formula as the Binary Independence Model, but with more degrees of interaction. With this approach, new factors can be taken into account to obtain a different ranking of the documents.
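   For reference (the standard textbook form, not copied from the poster), the Binary Independence Model relevance weight being re-derived is usually written as:

```latex
% p_i = P(term i occurs | relevant), s_i = P(term i occurs | non-relevant)
\[
  w_i = \log \frac{p_i \, (1 - s_i)}{s_i \, (1 - p_i)}
\]
% The poster recovers this expression while exposing additional degrees of
% interaction between the factors.
```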
Information Retrieval for Temporal Bounding BIBAFull-Text 29
  Leon Derczynski; Robert Gaizauskas
The temporal bounding problem is that of finding the beginning and ending times of a temporal interval during which an assertion holds. Existing approaches to temporal bounding have assumed the provision of a reference document from which to extract temporal bounds. We argue that a real-world setting does not include a reference document and that an information retrieval step is often required in order to locate documents containing candidate beginning and end times. We call this task "Information Retrieval for Temporal Bounding". This paper defines the task and discusses suitable evaluation metrics, as well as demonstrating the task's difficulty using a reference dataset.
Mathematical Specification and Logic Modelling in the context of IR BIBAFull-Text 30
  Miguel Martinez-Alvarez; Marco Bonzanini; Thomas Roelleke
Many IR models and tasks rely on a mathematical specification, and, in order to check its correctness, extensive testing and manual inspection are usually carried out. However, a formal guarantee can be particularly difficult, or even impossible, to provide.
   This poster highlights the relationship between the mathematical specification of IR algorithms and their modelling, using a logic-based abstraction that minimises the gap between the specification and a concrete implementation. As a result, the semantics of the program are well defined and correctness checks can be applied. This methodology is illustrated with the mathematical specification and logic modelling of a Bayesian classifier with Laplace smoothing. In addition to closing the gap between specification and modelling, and making correctness checking an inherent part of the design process, this work can lead to the automatic translation between a mathematical definition and its modelling.
A Modification of LambdaMART to Handle Noisy Crowdsourced Assessments BIBAFull-Text 31
  Pavel Metrikov; Jie Wu; Jesse Anderton; Virgil Pavlu; Javed A. Aslam
We consider noisy crowdsourced assessments and their impact on learning-to-rank algorithms. Starting with EM-weighted assessments, we modify LambdaMART in order to use smoothed probabilistic preferences over pairs of documents, directly as input to the ranking algorithm.