When LLMs Disagree: Diagnosing Relevance Filtering Bias and Retrieval Divergence in SDG Search

Abstract

Large language models (LLMs) are increasingly used to assign document relevance labels in information retrieval pipelines, especially in domains lacking human-labeled data. However, different models often disagree on borderline cases, raising concerns about how such disagreement affects downstream retrieval. This study examines labeling disagreement between two open-weight LLMs, LLaMA and Qwen, on a corpus of scholarly abstracts related to Sustainable Development Goals (SDGs) 1, 3, and 7. We isolate disagreement subsets and examine their lexical properties, rank-order behavior, and classification predictability. Our results show that model disagreement is systematic, not random: disagreement cases exhibit consistent lexical patterns, produce divergent top-ranked outputs under shared scoring functions, and are distinguishable with AUCs above 0.74 using simple classifiers. These findings suggest that LLM-based filtering introduces structured variability in document retrieval, even under controlled prompting and shared ranking logic. We propose using classification disagreement as an object of analysis in retrieval evaluation, particularly in policy-relevant or thematic search tasks.
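To make the "classification predictability" claim concrete, the sketch below shows one way a simple classifier could be used to test whether disagreement cases are lexically distinguishable: TF-IDF features with logistic regression, scored by AUC. This is an illustrative reconstruction, not the authors' code; the abstracts and disagreement labels are synthetic placeholders standing in for the SDG corpus and the LLaMA/Qwen label comparison.

```python
# Illustrative sketch (not the paper's implementation): probe whether
# documents on which two LLM labelers disagree are lexically separable
# from documents on which they agree, using a simple classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical data: each abstract is paired with 1 if the two models
# disagreed on its SDG relevance label, else 0. Real inputs would be the
# scholarly abstracts and the LLaMA/Qwen label comparison.
abstracts = [
    "poverty reduction programs in rural economies",
    "solar photovoltaic efficiency improvements",
    "health outcomes of vaccination campaigns",
    "energy policy and economic growth interactions",
    "microfinance access and household income",
    "clean cooking fuels and respiratory health",
    "grid storage for renewable energy systems",
    "maternal health interventions in low-income regions",
] * 10  # repeated to give the toy classifier enough samples
disagreed = [0, 0, 0, 1, 0, 1, 0, 0] * 10  # 1 = models disagreed

X_train, X_test, y_train, y_test = train_test_split(
    abstracts, disagreed, test_size=0.25, random_state=0, stratify=disagreed
)

# TF-IDF + logistic regression: the kind of "simple classifier" baseline
# the abstract refers to.
vec = TfidfVectorizer()
clf = LogisticRegression(max_iter=1000)
clf.fit(vec.fit_transform(X_train), y_train)
scores = clf.predict_proba(vec.transform(X_test))[:, 1]
auc = roc_auc_score(y_test, scores)
print(f"disagreement-prediction AUC: {auc:.2f}")
```

An AUC well above 0.5 on held-out data would indicate, as the paper argues, that disagreement is systematic rather than random noise.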

Citation

Ingram, William A., Bipasha Banerjee, and Edward A. Fox. 2025. "When LLMs Disagree: Diagnosing Relevance Filtering Bias and Retrieval Divergence in SDG Search." In 48th International ACM SIGIR Conference on Research and Development in Information Retrieval, Padua, Italy. As part of LLM4Eval @ SIGIR 2025: The Third Workshop on Large Language Models for Evaluation in Information Retrieval.

BibTeX

@inproceedings{ingram2025llm4eval,
  title = {When LLMs Disagree: Diagnosing Relevance Filtering Bias and Retrieval Divergence in SDG Search},
  author = {Ingram, William A. and Banerjee, Bipasha and Fox, Edward A.},
  year = {2025},
  booktitle = {48th International ACM SIGIR Conference on Research and Development in Information Retrieval},
  location = {Padua, Italy},
  url = {https://llm4eval.github.io/SIGIR2025/papers/},
  maintitle = {LLM4Eval \@ SIGIR 2025: The Third Workshop on Large Language Models for Evaluation in Information Retrieval}
}