
All Languages Matter: Understanding and Mitigating Language Bias in Multilingual RAG

April 22, 2026

2604.20199

Authors

Hongyu Lin, Le Sun, Guozhao Mo, Cheng Zhang, Bo Zheng

Abstract

Multilingual Retrieval-Augmented Generation (mRAG) leverages cross-lingual evidence to ground Large Language Models (LLMs) in global knowledge. However, we show that current mRAG systems suffer from a language bias during reranking, systematically favoring English and the query's native language.

By introducing an estimated oracle evidence analysis, we quantify a substantial performance gap between existing rerankers and the achievable upper bound. Further analysis reveals a critical distributional mismatch: while optimal predictions require evidence scattered across multiple languages, current systems systematically suppress such "answer-critical" documents, thereby limiting downstream generation performance.
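The distributional mismatch described above can be made concrete with a small sketch. The code below (an illustration, not the paper's method; the document lists and distances are hypothetical) compares the language distribution of a reranker's top-k documents against an estimated-oracle distribution using total variation distance, where a large distance indicates the kind of bias toward English described here.

```python
from collections import Counter

def language_distribution(docs, k):
    """Share of each language among the top-k ranked documents."""
    top = [d["lang"] for d in docs[:k]]
    counts = Counter(top)
    return {lang: counts[lang] / k for lang in counts}

def total_variation(p, q):
    """Total variation distance between two language distributions."""
    langs = set(p) | set(q)
    return 0.5 * sum(abs(p.get(l, 0.0) - q.get(l, 0.0)) for l in langs)

# Hypothetical rankings: the reranker concentrates on English, while the
# (estimated) oracle spreads answer-critical evidence across languages.
reranked = [{"lang": l} for l in ["en", "en", "en", "de", "en"]]
oracle   = [{"lang": l} for l in ["en", "de", "sw", "hi", "en"]]

p = language_distribution(reranked, k=5)  # {"en": 0.8, "de": 0.2}
q = language_distribution(oracle, k=5)    # English share halved, spread wider
print(total_variation(p, q))  # 0.4
```

A distance near zero would mean the reranker already surfaces evidence in roughly the languages the oracle needs; the gap quantified in the paper corresponds to this distance being large.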

To bridge this gap, we propose Language-Agnostic Utility-driven Reranker Alignment (LAURA), which aligns multilingual evidence ranking with downstream generative utility. Experiments across diverse languages and generation models show that LAURA effectively mitigates language bias and consistently improves mRAG performance.
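The core idea of utility-driven reranking can be sketched as follows. This is a minimal illustration, not LAURA itself: the `utility_fn` is a hypothetical stand-in for whatever estimate of downstream generative utility the system provides, and the document IDs and scores are invented.

```python
def utility_rerank(query, docs, utility_fn):
    """Order candidates by estimated downstream generative utility
    rather than by query-document similarity alone."""
    scored = [(utility_fn(query, d), d) for d in docs]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [d for _, d in scored]

# Hypothetical utility: how likely the generator is to produce the
# correct answer when conditioned on each document (stubbed values).
stub_utilities = {"doc_sw": 0.9, "doc_en": 0.6, "doc_de": 0.3}
utility = lambda q, d: stub_utilities[d["id"]]

docs = [{"id": "doc_en", "lang": "en"},
        {"id": "doc_de", "lang": "de"},
        {"id": "doc_sw", "lang": "sw"}]
print([d["id"] for d in utility_rerank("q", docs, utility)])
# ['doc_sw', 'doc_en', 'doc_de']
```

Under such a scoring rule, a non-English document that actually carries the answer outranks a fluent but unhelpful English one, which is the language-agnostic behavior the abstract describes.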

Details

  • © 2026 takara.ai Ltd
  • Content is sourced from third-party publications.