
LEPO: Latent Reasoning Policy Optimization for Large Language Models

April 20, 2026 · arXiv:2604.17892

Authors

Yuyan Zhou, Jiarui Yu, Hande Dong, Zhezheng Hao, Hong Wang

Abstract

Recently, latent reasoning has been introduced into large language models (LLMs) to leverage rich information within a continuous space. However, without stochastic sampling, these methods inevitably collapse to deterministic inference, failing to discover diverse reasoning paths.

To bridge this gap, we inject controllable stochasticity into latent reasoning via Gumbel-Softmax, restoring LLMs' exploratory capacity and enhancing their compatibility with reinforcement learning (RL). Building on this, we propose **L**atent R**e**asoning **P**olicy **O**ptimization (LEPO), a novel framework that applies RL directly to continuous latent representations. Specifically, in the rollout stage, LEPO maintains stochasticity to enable diverse trajectory sampling, while in the optimization stage, LEPO constructs a unified gradient estimate for both latent representations and discrete tokens.
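The abstract does not give LEPO's exact formulation, but the core mechanism it names, injecting stochasticity into a continuous latent step via Gumbel-Softmax, can be sketched as follows. Here the function names, temperature value, and the embedding-mixing step are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Sample a differentiable, near-one-hot vector via the Gumbel-Softmax trick.

    Adding Gumbel(0, 1) noise to the logits and taking a temperature-scaled
    softmax gives a stochastic relaxation of categorical sampling: small tau
    approaches a one-hot draw, large tau approaches a uniform mixture.
    """
    rng = rng or np.random.default_rng()
    # Gumbel(0, 1) noise: -log(-log(U)) with U ~ Uniform(0, 1)
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + gumbel) / tau
    y = y - y.max()               # shift for numerical stability
    expy = np.exp(y)
    return expy / expy.sum()

# Hypothetical latent reasoning step: instead of committing to one discrete
# token, mix token embeddings with the stochastic weights to produce a
# continuous "soft token" that can be fed to the next reasoning step.
rng = np.random.default_rng(0)
vocab_logits = np.array([2.0, 0.5, -1.0, 0.1])
embedding_table = rng.normal(size=(4, 8))        # (vocab_size, hidden_dim)
weights = gumbel_softmax(vocab_logits, tau=0.5, rng=rng)
latent = weights @ embedding_table               # continuous latent representation
```

Because the Gumbel noise makes each rollout a random draw while the softmax keeps the sample differentiable, repeated rollouts explore different latent trajectories, which is what makes the representation compatible with RL-style trajectory sampling.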

Extensive experiments show that LEPO significantly outperforms existing RL methods for discrete and latent reasoning.
