DGPO: Distribution Guided Policy Optimization for Fine Grained Credit Assignment

May 5, 2026
2605.03327

Authors

Zhongjing Du, Xu Jiang, Jingqi Tian, Qiaoman Zhang, Jiayu Ding

Abstract

Reinforcement learning is crucial for aligning large language models to perform complex reasoning tasks. However, current algorithms such as Group Relative Policy Optimization (GRPO) rely on coarse-grained, sequence-level credit assignment, which struggles to isolate the pivotal reasoning steps within long Chain-of-Thought generations.
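The coarse credit assignment described above can be illustrated with a minimal sketch of GRPO's group-relative advantage: each completion's reward is normalized against its sampling group, and that single scalar is then broadcast to every token of the sequence, so no individual reasoning step is singled out. The function and variable names below are illustrative, not from the paper.

```python
import numpy as np

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages: normalize each sequence's reward
    against the mean and std of its sampling group (no critic needed)."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

# One group of 4 sampled completions for the same prompt.
rewards = [1.0, 0.0, 0.0, 1.0]
adv = grpo_advantages(rewards)

# Coarse, sequence-level credit assignment: the SAME scalar advantage
# is applied to every token of a completion, regardless of which
# intermediate step was actually pivotal.
token_counts = [7, 5, 9, 6]          # tokens per sampled completion
per_token = [np.full(n, a) for n, a in zip(token_counts, adv)]
```

Every token in a correct completion receives the same positive advantage and every token in an incorrect one the same negative advantage, which is exactly the granularity limitation the abstract targets.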

Furthermore, the standard unbounded Kullback-Leibler (KL) divergence penalty induces severe gradient instability and mode-seeking conservatism, ultimately stifling the discovery of novel reasoning trajectories. To overcome these limitations, we introduce Distribution Guided Policy Optimization (DGPO), a novel critic-free reinforcement learning framework that reinterprets distribution deviation as a guiding signal rather than a rigid penalty.
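The abstract does not give DGPO's exact objective, so the following is only an illustrative contrast, under assumed names, between an unbounded per-token KL-style penalty and a bounded reweighting of the same deviation signal. The `tanh` squashing here is a stand-in for whatever guidance rule the paper actually uses.

```python
import numpy as np

# Per-token log probability ratios log(pi_theta / pi_ref); a large value
# marks a token where the policy deviates strongly from the reference.
log_ratio = np.array([0.1, 0.3, 4.0, 0.2])

# Standard approach: an unbounded KL-style penalty subtracted from the
# reward. A single outlier token can dominate the objective and its
# gradient, pushing the policy back toward the reference (mode seeking).
kl_penalty = log_ratio  # simple per-token KL estimator, unbounded above

# Guidance-style alternative (illustrative, NOT the paper's exact rule):
# squash the deviation into a bounded weight, so large deviations steer
# the update instead of destabilizing it.
guidance = np.tanh(log_ratio)  # bounded in (-1, 1)
```

The point of the contrast is that the penalty term grows without bound in the deviation, while the bounded signal preserves the direction of the deviation but caps its magnitude.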

