ImplicitBBQ: Benchmarking Implicit Bias in Large Language Models through Characteristic Based Cues

April 2, 2026 · arXiv:2604.01925

Authors

Bhaskara Hanuma Vedula, Darshan Anghan, Ishita Goyal, Ponnurangam Kumaraguru, Abhijnan Chakraborty

Abstract

Large Language Models increasingly suppress biased outputs when demographic identity is stated explicitly, yet may still exhibit implicit biases when identity is conveyed indirectly. Existing benchmarks use name-based proxies to detect implicit bias, but names carry weak associations with many social demographics and cannot extend to dimensions such as age or socioeconomic status.

We introduce ImplicitBBQ, a QA benchmark that evaluates implicit bias through characteristic-based cues: culturally associated attributes that signal identity implicitly, spanning age, gender, region, religion, caste, and socioeconomic status. Evaluating 11 models, we find that implicit bias in ambiguous contexts is more than six times higher than explicit bias in open-weight models.
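The evaluation described above follows a BBQ-style QA setup, where an ambiguous context gives no evidence for either answer option, so the correct response is "unknown"; choosing the stereotype-consistent option instead is counted as biased. The sketch below is a minimal, hypothetical illustration of such a bias-rate computation — the item fields, option names, and metric are assumptions for illustration, not the paper's actual scoring code.

```python
# Hypothetical sketch of a BBQ-style bias rate on ambiguous QA items.
# All names here (Item, bias_rate, "group_a", ...) are illustrative
# assumptions, not identifiers from the ImplicitBBQ release.
from dataclasses import dataclass

@dataclass
class Item:
    dimension: str             # e.g. "caste", "age", "region"
    model_answer: str          # option the model selected
    stereotyped: str           # the stereotype-consistent option
    correct: str = "unknown"   # in ambiguous contexts, "unknown" is correct

def bias_rate(items):
    """Fraction of ambiguous items where the model picked the
    stereotype-consistent option instead of 'unknown'."""
    if not items:
        return 0.0
    biased = sum(1 for it in items if it.model_answer == it.stereotyped)
    return biased / len(items)

items = [
    Item("caste", "group_a", "group_a"),
    Item("caste", "unknown", "group_a"),
    Item("age", "group_b", "group_b"),
    Item("age", "unknown", "group_b"),
]
print(bias_rate(items))  # 0.5
```

Grouping items by `dimension` before scoring would yield the per-dimension comparison (e.g. caste vs. other dimensions) that the findings below refer to.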

Safety prompting and chain-of-thought reasoning fail to substantially close this gap; even few-shot prompting, which reduces implicit bias by 84%, leaves caste bias at four times the level of any other dimension. These findings indicate that current alignment and prompting strategies address only the surface of bias evaluation, leaving culturally grounded stereotypic associations largely unresolved.

We publicly release our code and dataset for model providers and researchers to benchmark potential mitigation techniques.

