Examining document model residuals to provide feedback during information retrieval evaluation

Research output: Chapter in Book / Conference Paper › Conference Paper › peer-review

Abstract

Evaluation of document models for text-based information retrieval is crucial for developing document models that are appropriate for specific domains. Unfortunately, current document model evaluation methods for text retrieval provide no feedback beyond an evaluation score, so improving a model is a matter of trial and error. In this article, we examine how to provide feedback in the document model evaluation process by introducing a method of computing relevance score residuals and document model residuals for a given document-query set. Document model residuals indicate where the document model is accurate and where it is not. We derive a simple method of computing the document model residuals using ridge regression. We also provide an analysis of the residuals of two document models, and show how the correlation of document statistics to the residuals can be used to obtain statistically significant improvements to the precision of the model.
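The abstract describes computing residuals via ridge regression over a document-query set. The paper's exact feature set, regularisation parameter, and relevance scale are not given here, so the following is only a minimal sketch under assumed inputs: a hypothetical feature matrix `X` (one row per document-query pair) and observed relevance scores `y`, with residuals taken as the gap between observed and ridge-predicted scores.

```python
import numpy as np

def ridge_residuals(X, y, lam=1.0):
    """Fit ridge regression in closed form and return per-pair residuals.

    Closed-form ridge solution: w = (X^T X + lam * I)^{-1} X^T y.
    The residual y - X @ w flags document-query pairs the model scores poorly.
    """
    n_features = X.shape[1]
    w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)
    return y - X @ w

# Toy example: 4 document-query pairs, 2 features each (hypothetical values,
# not data from the paper).
X = np.array([[1.0, 0.2],
              [0.8, 0.5],
              [0.3, 0.9],
              [0.1, 0.4]])
y = np.array([1.0, 1.0, 0.0, 0.0])

residuals = ridge_residuals(X, y)
print(residuals)  # large |residual| marks pairs where the model is inaccurate
```

Pairs with large absolute residuals are the feedback signal: they localise where the document model disagrees with observed relevance, which is what the abstract proposes correlating against document statistics.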
Original language: English
Title of host publication: Proceedings of the Sixteenth Australasian Document Computing Symposium (ADCS 2011), Australian National University, Canberra, ACT, 2 December 2011
Publisher: RMIT University
Number of pages: 8
ISBN (Print): 9781921426926
Publication status: Published - 2011
Event: Australasian Document Computing Symposium
Duration: 5 Dec 2013 → …


