
CLEF 2010: International Conference of the Cross-Language Evaluation Forum

Fullname: CLEF 2010: Multilingual and Multimodal Information Access Evaluation: International Conference of the Cross-Language Evaluation Forum
Editors: Maristella Agosti; Nicola Ferro; Carol Peters; Maarten de Rijke; Alan Smeaton
Location: Padua, Italy
Dates: 2010-Sep-20 to 2010-Sep-23
Publisher: Springer Berlin Heidelberg
Series: Lecture Notes in Computer Science 6360
Standard No: DOI: 10.1007/978-3-642-15998-5; ISBN: 978-3-642-15997-8 (print), 978-3-642-15998-5 (online); hcibib: CLEF10
Papers: 16
Pages: 144
Links: Online Proceedings | DBLP Contents | Online Working Notes
  1. Keynote Addresses
  2. Resources, Tools, and Methods
  3. Experimental Collections and Datasets (1)
  4. Experimental Collections and Datasets (2)
  5. Evaluation Methodologies and Metrics (1)
  6. Evaluation Methodologies and Metrics (2)
  7. Panels

Keynote Addresses

IR between Science and Engineering, and the Role of Experimentation BIBA Full-Text 1
  Norbert Fuhr
Retrieval Evaluation in Practice BIBA Full-Text 2
  Ricardo A. Baeza-Yates

Resources, Tools, and Methods

A Dictionary- and Corpus-Independent Statistical Lemmatizer for Information Retrieval in Low Resource Languages BIBA Full-Text 3-14
  Aki Loponen; Kalervo Järvelin
A New Approach for Cross-Language Plagiarism Analysis BIBA Full-Text 15-26
  Rafael Corezola Pereira; Viviane Pereira Moreira; Renata Galante
Creating a Persian-English Comparable Corpus BIBA Full-Text 27-39
  Homa Baradaran Hashemi; Azadeh Shakery; Heshaam Feili

Experimental Collections and Datasets (1)

Validating Query Simulators: An Experiment Using Commercial Searches and Purchases BIBA Full-Text 40-51
  Bouke Huurnink; Katja Hofmann; Maarten de Rijke; Marc Bron
Using Parallel Corpora for Multilingual (Multi-document) Summarisation Evaluation BIBA Full-Text 52-63
  Marco Turchi; Josef Steinberger; Mijail Alexandrov Kabadjov; Ralf Steinberger

Experimental Collections and Datasets (2)

MapReduce for Information Retrieval Evaluation: "Let's Quickly Test This on 12 TB of Data" BIBA Full-Text 64-69
  Djoerd Hiemstra; Claudia Hauff
Which Log for Which Information? Gathering Multilingual Data from Different Log File Types BIBA Full-Text 70-81
  Maria Gäde; Vivien Petras; Juliane Stiller

Evaluation Methodologies and Metrics (1)

Examining the Robustness of Evaluation Metrics for Patent Retrieval with Incomplete Relevance Judgements BIBA Full-Text 82-93
  Walid Magdy; Gareth J. F. Jones
On the Evaluation of Entity Profiles BIBA Full-Text 94-99
  Maarten de Rijke; Krisztian Balog; Toine Bogers; Antal van den Bosch

Evaluation Methodologies and Metrics (2)

Evaluating Information Extraction BIBA Full-Text 100-111
  Andrea Esuli; Fabrizio Sebastiani
Tie-Breaking Bias: Effect of an Uncontrolled Parameter on Information Retrieval Evaluation BIBA Full-Text 112-123
  Guillaume Cabanac; Gilles Hubert; Mohand Boughanem; Claude Chrisment
Automated Component-Level Evaluation: Present and Future BIBA Full-Text 124-135
  Allan Hanbury; Henning Müller

Panels

The Four Ladies of Experimental Evaluation BIBA Full-Text 136-139
  Donna Harman; Noriko Kando; Mounia Lalmas; Carol Peters
A PROMISE for Experimental Evaluation BIBA Full-Text 140-144
  Martin Braschler; Khalid Choukri; Nicola Ferro; Allan Hanbury; Jussi Karlgren; Henning Müller; Vivien Petras; Emanuele Pianta; Maarten de Rijke; Giuseppe Santucci