RETURNN as a Generic Flexible Neural Toolkit with Application to Translation and Speech Recognition

May 14, 2018 · arXiv:1805.05225

Authors

Albert Zeyer, Tamer Alkhouli, Hermann Ney

Abstract

We compare the fast training and decoding speed of RETURNN for attention models in translation, which stems from fast CUDA LSTM kernels and a fast pure TensorFlow beam search decoder. We show that a layer-wise pretraining scheme for recurrent attention models yields over 1% absolute BLEU improvement and enables the training of deeper recurrent encoder networks.
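
The layer-wise pretraining mentioned above grows the recurrent encoder over the first training steps instead of training the full depth from scratch. Below is a minimal sketch of such a schedule in plain Python; the function names, the 2-to-6 layer growth, and the layer-dict format are illustrative assumptions, not RETURNN's actual configuration API.

```python
# Hypothetical sketch of a layer-wise pretraining schedule for a deep
# recurrent encoder. The 2 -> 6 layer growth and all names here are
# illustrative assumptions, not RETURNN's real config mechanism.

def encoder_depth_schedule(pretrain_step, initial_layers=2, final_layers=6):
    """Return the number of encoder BLSTM layers to use at this pretrain step."""
    return min(initial_layers + pretrain_step, final_layers)

def build_encoder_spec(num_layers, hidden_dim=1024):
    """Build a toy layer-list description of a bidirectional LSTM encoder."""
    return [{"class": "blstm", "units": hidden_dim, "name": f"enc_{i}"}
            for i in range(num_layers)]

# Grow the encoder step by step. In actual layer-wise pretraining, the
# weights of existing layers are carried over and only the newly added
# layer is freshly initialized.
for step in range(6):
    spec = build_encoder_spec(encoder_depth_schedule(step))
    print(f"pretrain step {step}: {len(spec)} encoder layers")
```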

Promising preliminary results on maximum expected BLEU training are presented.
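
Maximum expected BLEU training optimizes the expected BLEU score of hypotheses under the model's own distribution, rather than cross-entropy against the reference. The following is a minimal sketch of that criterion over an n-best list, assuming per-hypothesis model scores and precomputed sentence-level BLEU values; it is a generic TensorFlow formulation, not the paper's implementation.

```python
# Minimal sketch of a maximum expected BLEU criterion over an n-best list.
# Assumes model log-scores per hypothesis and precomputed sentence-level
# BLEU values; a generic formulation, not the paper's exact implementation.
import tensorflow as tf

def expected_bleu_loss(hyp_scores, hyp_bleu):
    """hyp_scores: [n] model log-scores; hyp_bleu: [n] sentence BLEU in [0, 1]."""
    # Renormalize the model scores over the n-best list to get a posterior.
    posterior = tf.nn.softmax(hyp_scores)
    # Expected BLEU under that posterior; negate to obtain a loss to minimize.
    return -tf.reduce_sum(posterior * hyp_bleu)

scores = tf.Variable([1.2, 0.3, -0.5])   # toy model scores for 3 hypotheses
bleu = tf.constant([0.42, 0.35, 0.18])   # toy sentence-level BLEU values
with tf.GradientTape() as tape:
    loss = expected_bleu_loss(scores, bleu)
grads = tape.gradient(loss, [scores])    # gradient w.r.t. the model scores
print(loss.numpy(), grads[0].numpy())
```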

We are able to train state-of-the-art models for translation and end-to-end models for speech recognition, and we present results on WMT 2017 and Switchboard. The flexibility of RETURNN enables a fast research feedback loop for experimenting with alternative architectures, and its generality allows it to be used in a wide range of applications.
