DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation

November 1, 2019 · arXiv:1911.00536

Authors

Yizhe Zhang, Siqi Sun, Yen-Chun Chen, Chris Brockett, Xiang Gao

Abstract

We present a large, tunable neural conversational response generation model, DialoGPT (dialogue generative pre-trained transformer). Trained on 147M conversation-like exchanges extracted from Reddit comment chains spanning 2005 through 2017, DialoGPT extends the Hugging Face PyTorch transformer to attain performance close to human, in terms of both automatic and human evaluation, in single-turn dialogue settings.
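The paper frames each dialogue session as one long text: the turns of a Reddit comment chain are concatenated, with the end-of-text token marking turn boundaries, so the model learns to generate a response conditioned on the full preceding context. A minimal sketch of that formatting, assuming GPT-2's byte-pair tokenizer from the transformers library (build_training_sequence is a hypothetical helper name, not from the paper's code):

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

def build_training_sequence(turns):
    """Flatten a dialogue session into one token sequence, closing each
    turn with the end-of-text token, so the model is trained to predict
    each turn conditioned on everything that precedes it."""
    ids = []
    for turn in turns:
        ids.extend(tokenizer.encode(turn))
        ids.append(tokenizer.eos_token_id)  # end-of-text as turn boundary
    return ids

# A two-turn context/response pair, as might appear in a comment chain.
session = ["Does money buy happiness?",
           "Depends how much money you spend on it."]
print(build_training_sequence(session))
```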

We show that conversational systems that leverage DialoGPT generate more relevant, contentful and context-consistent responses than strong baseline systems. The pre-trained model and training pipeline are publicly released to facilitate research into neural response generation and the development of more intelligent open-domain dialogue systems.
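Because the checkpoints are publicly released, single-turn generation can be tried in a few lines through the Hugging Face transformers API; a minimal sketch, assuming the microsoft/DialoGPT-medium checkpoint on the Hub and greedy decoding:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# Encode the user's turn, terminated by the end-of-text token,
# matching the training-time turn format.
input_ids = tokenizer.encode("Does money buy happiness?" + tokenizer.eos_token,
                             return_tensors="pt")

# Generate a response conditioned on the input turn (greedy decoding).
output_ids = model.generate(
    input_ids,
    max_length=200,
    pad_token_id=tokenizer.eos_token_id,
)

# Decode only the newly generated tokens, i.e. the response.
response = tokenizer.decode(output_ids[0, input_ids.shape[-1]:],
                            skip_special_tokens=True)
print(response)
```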

