JudgeMeNot: Personalizing Large Language Models to Emulate Judicial Reasoning in Hebrew

April 20, 2026 · arXiv:2604.18041

Authors

Arnon Sturm, Nir Grinberg, Itay Razumenko

Abstract

Despite significant advances in large language models, personalizing them for individual decision-makers remains an open problem. Here, we introduce a synthetic-organic supervision pipeline that transforms raw judicial decisions into instruction-tuning data, enabling parameter-efficient fine-tuning of personalized models for individual judges in low-resource settings.
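The pipeline itself is not detailed in this abstract. As a rough, hypothetical sketch of the idea, the snippet below converts raw decision texts into instruction-style training examples and applies LoRA-based parameter-efficient fine-tuning with Hugging Face transformers and peft; the base model, data schema, prompt template, and hyperparameters are all illustrative assumptions, not the authors' configuration.

```python
# Hypothetical sketch only: raw judicial decisions -> instruction-style data -> LoRA PEFT.
# Base model, field names, prompt template, and hyperparameters are illustrative assumptions.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

BASE_MODEL = "gpt2"  # stand-in only; the paper targets a Hebrew-capable base model

def to_instruction_example(decision: dict) -> dict:
    """Wrap one raw decision (assumed facts/ruling fields) in an instruction prompt."""
    return {"text": (
        "### Instruction:\nGiven the case facts, write the judge's reasoning and ruling.\n"
        f"### Facts:\n{decision['facts']}\n"
        f"### Response:\n{decision['ruling']}"
    )}

raw_decisions = [
    {"facts": "Tenant withheld rent after repeated flooding.",
     "ruling": "Claim allowed in part; rent reduced for the affected months."},
]
dataset = Dataset.from_list([to_instruction_example(d) for d in raw_decisions])

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
# Parameter-efficient fine-tuning: train small low-rank adapters, freeze base weights.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=1024),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="judge-lora", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
```

In this style of setup the base model's weights stay frozen and only the small low-rank adapter matrices are trained, which is what makes per-judge personalization feasible in a low-resource setting.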

We compare our approach to state-of-the-art personalization techniques across three tasks and settings. The results show that Causal Language Modeling followed by synthetically generated instruction-tuning significantly outperforms all other baselines, delivering marked improvements in lexical, stylistic, and semantic similarity.
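The abstract does not say which similarity metrics were used. A minimal evaluation sketch, assuming ROUGE-L for lexical overlap and multilingual sentence-embedding cosine similarity for semantic closeness (a stylistic measure is omitted since no detail is given):

```python
# Minimal sketch of scoring a generated decision against a judge's reference text.
# Metric choices (ROUGE-L, embedding cosine) are assumptions, not the paper's metrics.
from rouge_score import rouge_scorer
from sentence_transformers import SentenceTransformer, util

reference = "The court finds for the plaintiff and awards damages."   # hypothetical
generated = "The court rules in the plaintiff's favor with damages."  # hypothetical

# Lexical overlap: ROUGE-L F-score (longest common subsequence).
lexical = rouge_scorer.RougeScorer(["rougeL"]).score(reference, generated)["rougeL"].fmeasure

# Semantic similarity: cosine between multilingual sentence embeddings (covers Hebrew).
embedder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
ref_emb, gen_emb = embedder.encode([reference, generated], convert_to_tensor=True)
semantic = util.cos_sim(ref_emb, gen_emb).item()

print(f"ROUGE-L: {lexical:.3f}  embedding cosine: {semantic:.3f}")
```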

Notably, our model-generated outputs are indistinguishable from the reasoning of human judges, highlighting the viability of efficient personalization, even in low-resource settings.
