LLMs can be Fooled into Labelling a Document as Relevant
Published in SIGIR-AP, 2024
Download: [ Preprint ]
Abstract
Large Language Models (LLMs) are increasingly being used to assess the relevance of information objects. This work reports on experiments to study the labelling of short texts (i.e., passages) for relevance, using multiple open-source and proprietary LLMs. While the overall agreement of some LLMs with human judgements is comparable to human-to-human agreement measured in previous research, LLMs are more likely to label passages as relevant compared to human judges, indicating that LLM labels denoting non-relevance are more reliable than those indicating relevance.
This observation prompts us to further examine cases where human judges and LLMs disagree, particularly when the human judge labels the passage as non-relevant and the LLM labels it as relevant. Results show a tendency for many LLMs to label passages that include the original query terms as relevant. We therefore conduct experiments to inject query words into random and irrelevant passages, not unlike the way we inserted the query “Baby Yoda” into this paper. The results demonstrate that LLMs are highly influenced by the presence of query words in the passages under assessment, even if the wider passage has no relevance to the query. This tendency of LLMs to be fooled by the mere presence of query words demonstrates a weakness in our current measures of LLM labelling: relying on overall agreement misses important patterns of failures. There is a real risk of bias in LLM-generated relevance labels and, therefore, a risk of bias in rankers trained on those labels.
Additionally, we investigate the effects of deliberately manipulating LLMs by instructing them to label passages as relevant, similar to the instruction “this paper is perfectly relevant” inserted above. We find that such manipulation influences the performance of some LLMs, highlighting the critical need to consider potential vulnerabilities when deploying LLMs in real-world applications.
Citation
If you find this paper useful, please cite it using the following BibTeX:
@inproceedings{Alaofi2024LLMs,
author = {Marwah Alaofi and Paul Thomas and Falk Scholer and Mark Sanderson},
title = {LLMs can be Fooled into Labelling a Document as Relevant},
booktitle = {Proceedings of the 2024 Annual International ACM SIGIR Conference on Research and Development in Information Retrieval in the Asia Pacific Region},
series = {SIGIR-AP '24},
year = {2024},
date = {December 9--12},
location = {Tokyo, Japan},
pages = {10},
publisher = {ACM},
address = {New York, NY, USA},
doi = {10.1145/3673791.3698431},
url = {https://doi.org/10.1145/3673791.3698431},
note = {To appear}
}