Efficient Federated RLHF via Zeroth-Order Policy Optimization

April 20, 2026 (arXiv:2604.17747)

Authors

Deyi Wang, Qining Zhang, Lei Ying

Abstract

This paper considers reinforcement learning from human feedback in a federated learning setting with resource-constrained agents, such as edge devices. We propose an efficient federated RLHF algorithm, named Partitioned, Sign-based Stochastic Zeroth-order Policy Optimization (Par-S$^2$ZPO).

The algorithm is built on zeroth-order optimization with binary perturbations, which yields low communication, computation, and memory complexity by design. Our theoretical analysis establishes an upper bound on the convergence rate of Par-S$^2$ZPO, showing that it matches its centralized counterpart in sample complexity while converging faster in terms of policy-update iterations.
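As an illustration of the zeroth-order primitive described above, the following minimal Python sketch (our own construction, not the authors' code; the quadratic `loss`, the smoothing radius `mu`, and the step size `lr` are all placeholder choices) estimates a directional derivative from two function evaluations under a binary (Rademacher) perturbation and applies a sign-based update:

```python
import numpy as np

def loss(theta):
    # Hypothetical stand-in for a policy's objective (e.g. negative
    # expected reward); only function evaluations are used, no gradients.
    return np.sum((theta - 1.0) ** 2)

def zo_sign_step(theta, rng, mu=1e-3, lr=1e-2):
    # Binary (Rademacher) perturbation direction z in {-1, +1}^d.
    z = rng.choice([-1.0, 1.0], size=theta.shape)
    # Two-point zeroth-order estimate of the directional derivative.
    delta = (loss(theta + mu * z) - loss(theta - mu * z)) / (2.0 * mu)
    # Sign-based update: only the sign of the estimated gradient
    # coordinate is used, so each coordinate moves by exactly +/- lr.
    return theta - lr * np.sign(delta * z)

rng = np.random.default_rng(0)
theta = np.zeros(8)
for _ in range(2000):
    theta = zo_sign_step(theta, rng)
```

Because each step needs only two objective evaluations and one bit of movement per coordinate, the per-iteration cost stays low regardless of how the true gradient would be computed, which is the property that makes this primitive attractive on resource-constrained agents.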

Our experimental results show that Par-S$^2$ZPO outperforms a FedAvg-based RLHF baseline on four MuJoCo RL tasks.
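To make the communication contrast with FedAvg concrete, here is a hedged sketch (our construction, not necessarily the paper's partitioned protocol) of coordinate-wise majority voting over agents' sign vectors, the kind of one-bit-per-coordinate aggregation that sign-based methods enable, whereas FedAvg-style averaging transmits full-precision model weights:

```python
import numpy as np

def majority_vote(sign_vectors):
    # sign_vectors: list of arrays in {-1, +1}^d, one per agent.
    # The server sums the one-bit votes and keeps only the sign,
    # so uplink traffic is d bits per agent instead of d floats.
    return np.sign(np.sum(sign_vectors, axis=0))

# Three hypothetical agents voting on a 3-dimensional update direction.
agent_signs = [np.array([+1, -1, +1]),
               np.array([+1, +1, -1]),
               np.array([+1, -1, -1])]
update_dir = majority_vote(agent_signs)
# update_dir == [+1, -1, -1]
```

With an odd number of agents the vote is never tied, and the server's broadcast of the agreed direction is likewise one bit per coordinate.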


Details

  • © 2026 takara.ai Ltd
  • Content is sourced from third-party publications.