FAST: Factorizable Attention for Speeding up Transformers

February 12, 2024
arXiv:2402.07901

Authors

Armin Gerami, Monte Hoover, Pranav S. Dulepet, Ramani Duraiswami

Abstract

Motivated by the factorization inherent in the original fast multipole method and the improved fast Gauss transform, we introduce a factorizable form of attention that operates efficiently in high dimensions. This approach reduces the computational and memory complexity of the attention mechanism in transformers from $O(N^2)$ to $O(N)$.

In comparison to previous attempts, our work presents a linearly scaling attention mechanism that maintains the full representation of the attention matrix without resorting to sparsification, and incorporates the all-to-all relationship between tokens. We explore the properties of our new attention metric and conduct tests in various standard settings.

Results indicate that our attention mechanism performs robustly and holds significant promise for diverse applications where self-attention is used.
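
To make the $O(N^2)$ to $O(N)$ claim concrete, below is a minimal sketch of the generic factorization trick behind linear attention: when the unnormalized score between a query $q$ and a key $k$ factors as an inner product $\phi(q) \cdot \phi(k)$, the matrix products can be regrouped as $\phi(Q)\,(\phi(K)^T V)$, so the $N \times N$ attention matrix is never materialized. This is an illustrative reconstruction under assumptions, not the paper's actual FAST construction (FAST builds on fast-multipole-style expansions and its own attention metric); the feature map phi below is a placeholder choice.

    import numpy as np

    def quadratic_attention(Q, K, V, phi):
        # Naive O(N^2): materialize the full N x N score matrix, then row-normalize.
        S = phi(Q) @ phi(K).T
        W = S / S.sum(axis=1, keepdims=True)
        return W @ V

    def linear_attention(Q, K, V, phi):
        # Same output in O(N): regroup as phi(Q) @ (phi(K)^T V), never forming S.
        PQ, PK = phi(Q), phi(K)
        KV = PK.T @ V                 # (d_f, d_v) summary of all keys/values
        Z = PQ @ PK.sum(axis=0)       # per-query normalizer = row sums of S
        return (PQ @ KV) / Z[:, None]

    def phi(X):
        # Assumed positive feature map (ELU + 1), a common choice in linear attention.
        return np.where(X > 0, X + 1.0, np.exp(X))

    rng = np.random.default_rng(0)
    N, d = 128, 16
    Q, K, V = rng.normal(size=(3, N, d))
    assert np.allclose(quadratic_attention(Q, K, V, phi),
                       linear_attention(Q, K, V, phi))

The two routines are algebraically identical; only the grouping of the matrix products changes, which drops the cost from $O(N^2 d)$ to $O(N d_f d_v)$ in both time and memory.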
