An Empirical Study on Preference Tuning Generalization and Diversity Under Domain Shift

January 9, 2026 · arXiv:2601.05882

Authors

Constantinos Karouzos, Xingwei Tan, Nikolaos Aletras

Abstract

Preference tuning aligns pretrained language models with human judgments of quality, helpfulness, or safety by optimizing over explicit preference signals rather than likelihood alone. Prior work has shown that preference tuning degrades performance and reduces helpfulness when models are evaluated outside the training domain.
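To make "optimizing over explicit preference signals" concrete, here is a minimal sketch of the per-pair loss of Direct Preference Optimization (DPO), one widely used alignment objective; the numeric inputs are illustrative, and the paper's actual objectives and hyperparameters may differ.

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for a single preference pair.

    Each argument is the summed log-probability of a response under the
    policy being tuned (pi_*) or a frozen reference model (ref_*).
    """
    # Implicit reward margin: how much more the policy (relative to the
    # reference) favors the chosen response over the rejected one.
    margin = (pi_chosen - ref_chosen) - (pi_rejected - ref_rejected)
    # Negative log-sigmoid of the scaled margin: the loss shrinks as the
    # policy prefers the chosen response more strongly than the reference.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# A policy that favors the chosen response incurs a lower loss than one
# that matches the reference model exactly.
aligned = dpo_loss(-2.0, -8.0, -5.0, -5.0)   # policy prefers chosen
neutral = dpo_loss(-5.0, -5.0, -5.0, -5.0)   # policy matches reference
```

With a zero margin the loss reduces to log 2, and it decreases monotonically as the margin grows, which is what drives the policy toward the preferred responses.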

However, the extent to which adaptation strategies can mitigate this degradation remains largely unexplored. We address this gap with a comprehensive, systematic study of alignment generalization under domain shift.

We compare five popular alignment objectives and several source-to-target adaptation strategies, including target-domain supervised fine-tuning and pseudo-labeling, across summarization and question-answering helpfulness tasks. Our findings reveal systematic differences in generalization across alignment objectives under domain shift.

We show that adaptation strategies based on pseudo-labeling can substantially reduce domain-shift degradation.
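One common way to pseudo-label preference data on an unlabeled target domain is to sample candidate responses and rank them with a scorer trained on the source domain. The sketch below illustrates that generic recipe; the function names `generate` and `score` are hypothetical stand-ins, not the paper's implementation.

```python
def pseudo_label_pairs(prompts, generate, score):
    """Build pseudo-labeled preference pairs for target-domain prompts.

    `generate(prompt)` samples candidate responses from the current policy,
    and `score(prompt, response)` is a reward function trained on the
    source domain. Both are illustrative assumptions, not from the paper.
    """
    pairs = []
    for prompt in prompts:
        candidates = generate(prompt)
        ranked = sorted(candidates, key=lambda r: score(prompt, r), reverse=True)
        # Treat the best- and worst-scored samples as a (chosen, rejected)
        # pair, usable for preference tuning on the target domain.
        pairs.append((prompt, ranked[0], ranked[-1]))
    return pairs

# Toy usage with stand-in generation and scoring functions.
pairs = pseudo_label_pairs(
    ["summarize: ..."],
    generate=lambda p: ["short answer", "a longer, more detailed answer"],
    score=lambda p, r: len(r),  # toy reward: prefer longer responses
)
```

The resulting triples can then be fed to any pairwise preference objective, which is what lets adaptation proceed without human-labeled preferences in the target domain.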
