Verboseness Fission for BM25 Document Length Normalization

Conference paper
Aldo Lipani, Mihai Lupu, Allan Hanbury, Akiko Aizawa
ICTIR '15: Proceedings of the 2015 International Conference on The Theory of Information Retrieval, October 2015, Pages 385-388
Publication year: 2015

BM25 is probably the best-known term weighting model in Information Retrieval. Depending on the formula variant at hand, it has two or three parameters (k1, b, and k3). This paper addresses b, the document length normalization parameter. Based on the observation that the two cases previously discussed for length normalization (multi-topicality and verboseness) are in fact three: multi-topicality, verboseness with word repetition (repetitiveness), and verboseness with synonyms, we propose and test a new length normalization method that removes the need for a b parameter in BM25. Testing the new method on a set of purposefully varied test collections, we observe results statistically indistinguishable from the optimal ones, thereby removing the need for ground-truth-based optimization.
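
For reference, the role of b can be seen in a common formulation of BM25 (a sketch of the standard variant, not necessarily the exact formula studied in the paper):

    \mathrm{BM25}(d,q) = \sum_{t \in q} \mathrm{IDF}(t) \cdot
        \frac{f(t,d)\,(k_1 + 1)}{f(t,d) + k_1 \left(1 - b + b\,\frac{|d|}{\mathrm{avgdl}}\right)}

Here f(t,d) is the frequency of term t in document d, |d| the document length, and avgdl the average document length in the collection; b = 1 applies full length normalization, b = 0 none.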

Splitting Water: Precision and Anti-Precision to Reduce Pool Bias

Conference paper
Aldo Lipani, Mihai Lupu, Allan Hanbury
ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '15), ACM, New York, NY, USA, 2015, pp 103-112
Publication year: 2015

For many tasks in evaluation campaigns, especially those modeling narrow, domain-specific challenges, lack of participation leads to a potential pooling bias due to the scarce number of pooled runs. It is well known that the reliability of a test collection is proportional to the number of topics and relevance assessments provided for each topic, but also, to some extent, to the diversity of participation in the challenges. Hence, in this paper we present a new perspective on reducing pool bias by studying the effect of merging an unpooled run with the pooled runs. We also introduce an indicator used by the bias correction method to decide whether the correction needs to be applied. This indicator gives strong clues about the potential of a “good” run tested on an “unfriendly” test collection (i.e. a collection whose pool was contributed to by runs very different from the one at hand). We demonstrate the correctness of our method on a set of fifteen test collections from the Text REtrieval Conference (TREC), observing a reduction in both system ranking error and absolute score difference error.
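
As a rough illustration of the setting, a minimal Python sketch of pooling and of a coverage-style indicator follows (the pool depth, run representation, and indicator below are simplifying assumptions for illustration, not the paper's actual correction method):

    def build_pool(pooled_runs, depth=100):
        """Per topic, the pool is the union of the top-`depth` documents
        of every pooled run; only pooled documents get judged."""
        pool = {}
        for run in pooled_runs:                # run: topic -> ranked list of doc ids
            for topic, ranking in run.items():
                pool.setdefault(topic, set()).update(ranking[:depth])
        return pool

    def pool_coverage(run, pool, depth=100):
        """Fraction of the run's top documents that appear in the pool.
        A low value flags an 'unfriendly' collection for this run, i.e.
        a pool built from runs very different from it."""
        judged = total = 0
        for topic, ranking in run.items():
            top = ranking[:depth]
            judged += sum(1 for d in top if d in pool.get(topic, set()))
            total += len(top)
        return judged / total if total else 0.0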

DASyR(IR) - Document Analysis System for Systematic Reviews (in Information Retrieval)

Conference paper
Florina Piroi, Aldo Lipani, Mihai Lupu, Allan Hanbury
Publication year: 2015

Creating systematic reviews is a painstaking task, undertaken especially in domains where experimental results are the primary means of knowledge creation. For the review authors, analysing documents to extract relevant data is a demanding activity. To support the creation of systematic reviews, we have created DASyR, a semi-automatic document analysis system. DASyR is our solution to annotating published papers for the purpose of ontology population. For domains where dictionaries do not exist or are inadequate, DASyR relies on a semi-automatic annotation bootstrapping method based on positional Random Indexing, followed by traditional Machine Learning algorithms to extend the annotation set. We provide an example of applying the method to a subdomain of Computer Science, Information Retrieval evaluation, whose reliance on large-scale experimental studies makes it well suited for testing. We show the utility of DASyR through experimental results for different parameter values of the bootstrap procedure, evaluated in terms of annotator agreement, error rate, precision, and recall.
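
A minimal sketch of Random Indexing with positional weighting, the family of techniques the bootstrapping step builds on (the dimensionality, sparsity, window size, and distance weighting are illustrative assumptions; other positional variants instead permute index vectors by offset):

    import numpy as np

    def index_vector(dim=2000, nonzeros=10, rng=None):
        """Sparse ternary random index vector: a few +1/-1 entries, rest zero."""
        rng = rng or np.random.default_rng()
        v = np.zeros(dim)
        pos = rng.choice(dim, size=nonzeros, replace=False)
        v[pos] = rng.choice([-1.0, 1.0], size=nonzeros)
        return v

    def context_vectors(tokens, dim=2000, window=2, seed=42):
        """Accumulate, for each term, the index vectors of its neighbours,
        weighted by inverse distance within the window."""
        rng = np.random.default_rng(seed)
        index = {t: index_vector(dim, rng=rng) for t in set(tokens)}
        context = {t: np.zeros(dim) for t in set(tokens)}
        for i, t in enumerate(tokens):
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if j != i:
                    context[t] += index[tokens[j]] / abs(j - i)
        return context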

An Initial Analytical Exploration of Retrievability

Conference paper
Aldo Lipani, Mihai Lupu, Akiko Aizawa, Allan Hanbury
ICTIR '15: Proceedings of the 2015 International Conference on The Theory of Information Retrieval, October 2015, Pages 385-388
Publication year: 2015

We approach the problem of retrievability from an analytical perspective, starting by modeling conjunctive and disjunctive queries in a Boolean model. We show that this represents an upper bound on retrievability for all other best-match algorithms. We then examine the imbalance in the distribution of retrievability using the Gini coefficient. Simulation-based experiments show the behavior of the Gini coefficient for retrievability under different types and lengths of queries, as well as different assumptions about the document length distribution in a collection.
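
A minimal sketch of how retrievability and its Gini coefficient are typically computed in this line of work (the rank cutoff c and the unweighted query set are common simplifications from the retrievability literature, not necessarily the paper's exact setup):

    def retrievability(doc_ids, results_per_query, c=100):
        """r(d): number of queries that retrieve d within the top c ranks."""
        r = {d: 0 for d in doc_ids}
        for ranking in results_per_query:
            for d in ranking[:c]:
                if d in r:
                    r[d] += 1
        return r

    def gini(values):
        """Gini coefficient of non-negative scores: 0 means perfectly equal
        retrievability, values near 1 a highly skewed distribution."""
        xs = sorted(values)
        n, total = len(xs), sum(xs)
        if n == 0 or total == 0:
            return 0.0
        cum = sum((i + 1) * x for i, x in enumerate(xs))
        return (2 * cum) / (n * total) - (n + 1) / n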

Extracting Nanopublications from IR Papers

Conference paper
Aldo Lipani, Florina Piroi, Linda Andersson, Allan Hanbury
Multidisciplinary Information Retrieval, Lecture Notes in Computer Science, Volume 8849, 2014, Pages 53-62
Publication year: 2014

Published scientific results should be reproducible; otherwise, the findings reported in the publications are less valued by the community. Several undertakings, like myExperiment, RunMyCode, or DIRECT, contribute to the availability of data, experiments, and algorithms. Some of these experiments and algorithms are even referenced or mentioned in later publications. Generally, however, research articles that present experimental results only summarize the algorithms and data used; in the better cases, they refer to a web link where the code can be found. We give here an account of our experience with extracting the data necessary to reproduce IR experiments. We also discuss automating this information extraction and storing the data as IR nanopublications, which can later be queried and aggregated by automated processes as the need arises.
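
A minimal sketch of what an IR nanopublication could look like, built with rdflib in Python (the nanopub schema namespace is the standard one; the ir-ontology terms, resource names, and the MAP value are hypothetical illustrations, not the paper's actual vocabulary):

    from rdflib import Dataset, Namespace, Literal
    from rdflib.namespace import RDF, XSD

    NP = Namespace("http://www.nanopub.org/nschema#")
    IR = Namespace("http://example.org/ir-ontology#")   # hypothetical IRIs
    EX = Namespace("http://example.org/nanopub/")

    ds = Dataset()
    head = ds.graph(EX.head)
    assertion = ds.graph(EX.assertion)
    provenance = ds.graph(EX.provenance)

    # Head graph: declares the nanopublication and links its parts.
    head.add((EX.pub1, RDF.type, NP.Nanopublication))
    head.add((EX.pub1, NP.hasAssertion, EX.assertion))
    head.add((EX.pub1, NP.hasProvenance, EX.provenance))

    # Assertion: one experimental result, expressed with illustrative terms.
    assertion.add((EX.run1, RDF.type, IR.Run))
    assertion.add((EX.run1, IR.evaluatedOn, IR.TREC8AdHoc))
    assertion.add((EX.run1, IR.measuredMAP, Literal(0.234, datatype=XSD.decimal)))

    # Provenance: the paper the assertion was extracted from (placeholder).
    provenance.add((EX.assertion, IR.extractedFrom, EX.sourcePaper))

    print(ds.serialize(format="trig"))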

An Information Retrieval Ontology for Information Retrieval Nanopublications

Conference paper
Aldo Lipani, Florina Piroi, Linda Andersson, Allan Hanbury
Information Access Evaluation. Multilinguality, Multimodality, and Interaction, Lecture Notes in Computer Science, Volume 8685, 2014, pp 44-49
Publication year: 2014

Retrieval experiments produce plenty of data, such as experiment settings and experimental results, which are usually not all included in the published articles. Even when they are mentioned, they are not easily machine-readable. We propose the use of IR nanopublications to describe such information in a formal language. Furthermore, to support the unambiguous description of IR domain aspects, we present a preliminary IR ontology. The use of IR nanopublications will facilitate the assessment and comparison of IR systems and enhance the degree of reproducibility and reliability of IR research progress.
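
Reusing the hypothetical ir-ontology vocabulary from the sketch in the previous entry, such nanopublications could then be aggregated and compared with a SPARQL query via rdflib (again, the ontology terms and file name are assumptions for illustration):

    from rdflib import Dataset

    ds = Dataset()
    ds.parse("nanopubs.trig", format="trig")  # hypothetical file of IR nanopublications

    # Compare systems by the MAP values asserted in their nanopublications.
    q = """
    PREFIX ir: <http://example.org/ir-ontology#>
    SELECT ?run ?map WHERE {
      GRAPH ?g { ?run ir:evaluatedOn ir:TREC8AdHoc ;
                      ir:measuredMAP ?map . }
    }
    ORDER BY DESC(?map)
    """
    for run, map_score in ds.query(q):
        print(run, map_score)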