Generative Information Retrieval Evaluation
Published in Information Access in the Era of Generative AI, 2025
Download: [Preprint]
Abstract
In this chapter, we consider generative information retrieval (IR) evaluation from two distinct but interrelated perspectives. First, Large Language Models (LLMs) are themselves rapidly becoming tools for evaluation, with current research indicating that LLMs may be superior to crowdsourced workers and other paid assessors on basic relevance judgment tasks. We review past and ongoing research in this area, including speculation on the future of shared task initiatives such as the Text REtrieval Conference (TREC), and a discussion of the continuing need for human assessment. Second, we consider the evaluation of emerging LLM-based Generative Information Retrieval (GenIR) systems, including Retrieval-Augmented Generation (RAG) systems. We consider approaches that focus both on the end-to-end evaluation of GenIR systems and on the evaluation of the retrieval component as an element of a RAG system. Going forward, we expect the evaluation of GenIR systems to rest at least partially on LLM-based assessment, creating an apparent circularity, with a system seemingly evaluating its own output. We resolve this apparent circularity in two ways: (1) by viewing LLM-based assessment as a form of “slow search,” in which a slower IR system is used to evaluate and train a faster production IR system, and (2) by recognizing the continuing need to ground evaluation in human assessment, even if the characteristics of that human assessment must change.
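As a concrete illustration of the LLM-based assessment discussed above, the sketch below shows one common setup: a pointwise relevance judgment obtained by prompting an LLM with a query and a passage. This is a minimal sketch only; the llm function is a hypothetical stand-in for whatever chat-completion client is available, and the prompt wording and 0–3 grading scale are illustrative assumptions, not the protocol described in the chapter.

def llm(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion call.

    Replace with a real client; assumed to return the model's
    text response for a single prompt.
    """
    raise NotImplementedError

def judge_relevance(query: str, passage: str) -> int:
    """Ask an LLM for a graded relevance judgment on a 0-3 scale
    (an illustrative scale, not the chapter's protocol)."""
    prompt = (
        "You are a relevance assessor.\n"
        f"Query: {query}\n"
        f"Passage: {passage}\n"
        "On a scale of 0 (not relevant) to 3 (perfectly relevant), "
        "how relevant is the passage to the query? "
        "Answer with a single digit."
    )
    answer = llm(prompt).strip()
    # Fall back to 0 if the model does not answer with a digit.
    return int(answer[0]) if answer and answer[0].isdigit() else 0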
Citation
If you find this chapter useful, please cite it using the following BibTeX:
@incollection{alaofi2024genir,
  author    = {Marwah Alaofi and Negar Arabzadeh and Charles L. A. Clarke and Mark Sanderson},
  title     = {Generative Information Retrieval Evaluation},
  booktitle = {Information Access in the Era of Generative AI},
  editor    = {Ryen W. White and Chirag Shah},
  series    = {The Information Retrieval Series},
  volume    = {51},
  pages     = {135--159},
  publisher = {Springer, Cham},
  year      = {2025},
  doi       = {10.1007/978-3-031-73147-1_6},
  url       = {https://doi.org/10.1007/978-3-031-73147-1_6},
  isbn      = {978-3-031-73146-4}
}