
What's Behind PPO's Collapse in Long-CoT? Value Optimization Holds the Secret

March 3, 2025
arXiv:2503.01491

Authors

Ruofei Zhu, Tiantian Fan, Lin Yan, Yufeng Yuan, Yu Yue

Abstract

Reinforcement learning (RL) is pivotal for enabling large language models (LLMs) to generate long chains of thought (CoT) for complex tasks such as math and reasoning. However, Proximal Policy Optimization (PPO), though effective in many RL scenarios, fails in long-CoT tasks.

This paper identifies that value initialization bias and reward signal decay are the root causes of PPO's failure. We propose Value-Calibrated PPO (VC-PPO) to address these issues.

In VC-PPO, the value model is pretrained to tackle initialization bias, and the Generalized Advantage Estimation (GAE) computation is decoupled between the actor and critic to mitigate reward signal decay. Experiments on the American Invitational Mathematics Examination (AIME) show that VC-PPO significantly boosts PPO performance.
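The abstract does not include implementation details, but the decoupling idea can be illustrated. Below is a minimal sketch in NumPy, assuming the typical long-CoT setup where the reward is sparse (a single verifier score at the final token). With lambda < 1, standard GAE attenuates that terminal reward by a factor of (gamma * lambda)^(T - 1 - t) by the time it reaches early tokens; the sketch therefore uses lambda near 1 for the actor's advantages while keeping a smaller lambda for the critic's regression targets. The function names and the specific lambda values are illustrative assumptions, not the paper's code.

    import numpy as np

    def gae(rewards, values, gamma, lam):
        """Standard Generalized Advantage Estimation over one trajectory.

        rewards: per-step rewards, shape (T,)
        values:  critic estimates V(s_0 .. s_T), shape (T + 1,)
        Returns advantages of shape (T,).
        """
        T = len(rewards)
        advantages = np.zeros(T)
        last_adv = 0.0
        for t in reversed(range(T)):
            # TD residual: delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)
            delta = rewards[t] + gamma * values[t + 1] - values[t]
            # Recurrence: A_t = delta_t + gamma * lam * A_{t+1}
            last_adv = delta + gamma * lam * last_adv
            advantages[t] = last_adv
        return advantages

    def decoupled_gae(rewards, values, gamma=1.0, lam_policy=1.0, lam_value=0.95):
        """Decoupled GAE: separate lambdas for actor and critic (assumed values).

        With lam_policy = 1.0 the policy advantage reduces to the Monte Carlo
        return minus the value baseline, so a sparse end-of-sequence reward is
        not geometrically attenuated over a long CoT. The critic keeps a
        smaller lambda for lower-variance value targets.
        """
        policy_advantages = gae(rewards, values, gamma, lam_policy)
        value_advantages = gae(rewards, values, gamma, lam_value)
        value_targets = value_advantages + values[:-1]  # critic regression targets
        return policy_advantages, value_targets

    # Example: sparse terminal reward over a 1000-token CoT
    T = 1000
    rewards = np.zeros(T); rewards[-1] = 1.0   # verifier score only at the end
    values = np.zeros(T + 1)                   # freshly initialized critic
    adv_pi, v_targets = decoupled_gae(rewards, values)
    # adv_pi[0] == 1.0: lam_policy = 1 preserves the terminal signal at token 0;
    # with lam = 0.95 the same signal would arrive scaled by 0.95**999 ~ 6e-23.

The numeric example shows why reward signal decay matters specifically in the long-CoT regime: over a thousand tokens, any lambda meaningfully below 1 erases the terminal reward for early tokens, while lambda = 1 on the actor side preserves it exactly.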

Ablation studies show that the techniques in VC-PPO are essential for enhancing PPO on long-CoT tasks.
