
KDFlow: A User-Friendly and Efficient Knowledge Distillation Framework for Large Language Models

March 2, 2026 · arXiv:2603.01875

Authors

Songming Zhang, Xue Zhang, Tong Zhang, Bojie Hu, Yufeng Chen

Abstract

Knowledge distillation (KD) is an essential technique for compressing large language models (LLMs) into smaller ones. However, despite the distinct roles of the student and teacher models in KD, most existing frameworks still use a single homogeneous training backend (e.g., FSDP or DeepSpeed) for both models, leading to suboptimal training efficiency.

In this paper, we present a novel framework for LLM distillation, termed KDFlow, which features a decoupled architecture and employs SGLang for teacher inference. By bridging the training efficiency of FSDP2 and the inference efficiency of SGLang, KDFlow combines the strengths of both in a unified system.
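The abstract does not spell out how the two backends are wired together. The snippet below is a minimal, hypothetical sketch of such a decoupled setup in PyTorch: the model paths, the single-script layout (KDFlow runs the teacher and student in separate processes), and the exact calls are illustrative assumptions, not KDFlow's actual API.

```python
# Hypothetical sketch of a decoupled KD setup: teacher served by SGLang,
# student trained under FSDP2. In KDFlow the two sides run in separate
# processes; they appear in one script here purely for illustration.
# Model names and wiring are illustrative assumptions.
import torch
import torch.distributed as dist
from torch.distributed.fsdp import fully_shard  # FSDP2 entry point (PyTorch >= 2.6)
from transformers import AutoModelForCausalLM
import sglang as sgl

dist.init_process_group(backend="nccl")  # assumes a torchrun-style launch

# Teacher side: frozen teacher hosted by an SGLang engine for fast batched inference.
teacher = sgl.Engine(model_path="Qwen/Qwen2.5-7B-Instruct")

# Student side: a smaller model sharded with FSDP2 and trained as usual.
student = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
fully_shard(student)  # shards parameters and gradients across data-parallel ranks
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)
```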

Moreover, instead of transferring full logits across processes, our framework transmits only the teacher's hidden states using zero-copy data transfer and recomputes the logits on the student side, effectively balancing communication cost and KD performance. Furthermore, our framework supports both off-policy and on-policy distillation and incorporates algorithms for cross-tokenizer KD through highly extensible, user-friendly APIs.
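To see why shipping hidden states rather than logits saves bandwidth: a hidden state has the teacher's model dimension (a few thousand values per token), whereas full logits span the vocabulary (often 100k+ entries per token). The sketch below illustrates the recomputation step; the function names are hypothetical, and only the idea of applying a frozen copy of the teacher's output projection on the student side follows the description above.

```python
# Hypothetical sketch of logit recomputation on the student side.
# `hidden_states` is assumed to arrive from the teacher process via the
# zero-copy transfer described above; `teacher_lm_head` is a frozen copy of
# the teacher's output projection kept on the student side.
import torch
import torch.nn.functional as F

def recompute_teacher_logits(hidden_states: torch.Tensor,
                             teacher_lm_head: torch.nn.Module) -> torch.Tensor:
    """hidden_states: [batch, seq_len, d_teacher] -> logits: [batch, seq_len, vocab]."""
    with torch.no_grad():
        return teacher_lm_head(hidden_states)

def forward_kl_loss(student_logits: torch.Tensor,
                    teacher_logits: torch.Tensor,
                    temperature: float = 1.0) -> torch.Tensor:
    """Token-level forward KL(teacher || student), a standard KD objective."""
    t_logprobs = F.log_softmax(teacher_logits / temperature, dim=-1)
    s_logprobs = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(s_logprobs, t_logprobs, log_target=True, reduction="batchmean")
```

In an off-policy setting this loss would be applied to a fixed dataset, whereas on-policy distillation would first sample continuations from the student and have the teacher score them before the same recomputation step.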

Experiments show that KDFlow can achieve 1.44$\times$ to 6.36$\times$ speedup compared to current KD frameworks, enabling researchers to rapidly prototype and scale LLM distillation with minimal engineering overhead. Code is available at: https://github.com/songmzhang/KDFlow
