Linear attention is (maybe) all you need (to understand transformer optimization)

October 2, 2023 · arXiv:2310.01082

Authors

Kwangjun Ahn, Xiang Cheng, Minhak Song, Chulhee Yun, Ali Jadbabaie

Abstract

Transformer training is notoriously difficult, requiring careful optimizer design and the use of various heuristics. We make progress towards understanding the subtleties of Transformer training by carefully studying a simple yet canonical linearized shallow Transformer model.

Specifically, we train linear Transformers to solve regression tasks, inspired by J. von Oswald et al. (ICML 2023) and K. Ahn et al. (NeurIPS 2023). Most importantly, we observe that our proposed linearized models can reproduce several prominent aspects of Transformer training dynamics.
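
To make the setup concrete, below is a minimal sketch, assuming a one-layer linear attention model (softmax removed) trained on synthetic in-context linear regression prompts in the style of the works cited above. All names, dimensions, and hyperparameters are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (an assumption, not the authors' implementation) of a
# one-layer linear attention model on in-context linear regression prompts.
import torch

torch.manual_seed(0)
d, n_ctx, batch = 5, 20, 64  # feature dim, context length, batch size (assumed)

class LinearAttention(torch.nn.Module):
    """One attention layer with the softmax removed:
    out = Z + (Z Wq)(Z Wk)^T (Z Wv) / n."""
    def __init__(self, dim):
        super().__init__()
        self.Wq = torch.nn.Linear(dim, dim, bias=False)
        self.Wk = torch.nn.Linear(dim, dim, bias=False)
        self.Wv = torch.nn.Linear(dim, dim, bias=False)

    def forward(self, Z):
        Q, K, V = self.Wq(Z), self.Wk(Z), self.Wv(Z)
        # No softmax: attention weights are raw inner products.
        return Z + (Q @ K.transpose(-2, -1)) @ V / Z.shape[1]

def sample_tasks():
    # Each task: y_i = <w, x_i> with a fresh w per prompt; tokens are
    # (x_i, y_i) pairs, and the final query token has its label zeroed out.
    w = torch.randn(batch, d, 1)
    X = torch.randn(batch, n_ctx + 1, d)
    y = (X @ w).squeeze(-1)
    Z = torch.cat([X, y.unsqueeze(-1)], dim=-1)
    Z[:, -1, -1] = 0.0            # hide the query's label
    return Z, y[:, -1]            # prompt matrix, target for the query

model = LinearAttention(d + 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(1000):
    Z, target = sample_tasks()
    pred = model(Z)[:, -1, -1]    # read prediction off the query token's label slot
    loss = ((pred - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 200 == 0:
        print(f"step {step:4d}  loss {loss.item():.4f}")
```

The defining simplification is dropping the softmax: attention weights become raw inner products, which keeps the layer analytically tractable while, per the paper's observations, still reproducing salient features of full Transformer training dynamics.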

Consequently, our results suggest that a simple linearized Transformer model can serve as a valuable, realistic abstraction for understanding Transformer optimization.
