Beyond the Dirac Delta: Mitigating Diversity Collapse in Reinforcement Fine-Tuning for Versatile Image Generation

January 18, 2026 · 2601.12401v1

Authors

Jinmei Liu, Haoru Li, Zhenhong Sun, Chaofeng Chen, Yatao Bian

Abstract

Reinforcement learning (RL) has emerged as a powerful paradigm for fine-tuning large-scale generative models, such as diffusion and flow models, to align with complex human preferences and user-specified tasks. A fundamental limitation remains the curse of diversity collapse, where the objective formulation and optimization landscape inherently collapse the policy to a Dirac delta distribution. To address this challenge, we propose DRIFT (DiveRsity-Incentivized Reinforcement Fine-Tuning for Versatile Image Generation), an innovative framework that systematically incentivizes output diversity throughout the on-policy fine-tuning process, reconciling strong task alignment with high generation diversity to enhance the versatility essential for applications that demand diverse candidate generations. We approach the problem from three representative perspectives: i) sampling a reward-concentrated subset that filters out reward outliers to prevent premature collapse; ii) prompting with stochastic variations to expand the conditioning space; and iii) optimizing intra-group diversity with a potential-based reward shaping mechanism. Experimental results show that DRIFT achieves superior Pareto dominance with respect to task alignment and generation diversity, yielding a $9.08\%\!\sim\!43.46\%$ increase in diversity at equivalent alignment levels and a $59.65\%\!\sim\!65.86\%$ increase in alignment at equivalent levels of diversity.
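The abstract's first and third mechanisms can be illustrated with a minimal sketch. The function names, the median-distance outlier filter, and the mean-pairwise-distance potential below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def reward_concentrated_subset(rewards, k):
    """Sketch of perspective (i): keep the k samples whose rewards lie
    closest to the group median, filtering out reward outliers."""
    r = np.asarray(rewards, dtype=float)
    # indices sorted by absolute deviation from the median reward
    idx = np.argsort(np.abs(r - np.median(r)))[:k]
    return np.sort(idx)

def shaped_rewards(rewards, embeddings, lam=0.1):
    """Sketch of perspective (iii): add a potential-based diversity bonus.
    Here the potential of a sample is its mean embedding distance to the
    other samples in the group (an assumed choice of potential)."""
    r = np.asarray(rewards, dtype=float)
    e = np.asarray(embeddings, dtype=float)
    n = len(r)
    # pairwise Euclidean distances between sample embeddings
    d = np.linalg.norm(e[:, None, :] - e[None, :, :], axis=-1)
    potential = d.sum(axis=1) / max(n - 1, 1)
    return r + lam * potential
```

Under this sketch, an outlier reward is excluded from the training subset, and a sample whose embedding sits far from the rest of its group receives a larger shaped reward, pushing the policy away from collapsing onto a single mode.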
