
Two-Stream temporal transformer for video action classification

January 20, 2026 · arXiv 2601.14086v1

Authors

Nattapong Kurpukdee, Adrian G. Bors

Abstract

Motion representation plays an important role in video understanding and has many applications, including action recognition and robot or autonomous-vehicle guidance, among others. Recently, transformer networks, through their self-attention mechanism, have proven effective in many applications.

In this study, we introduce a new two-stream transformer video classifier that extracts spatio-temporal information from both frame content and optical flow, the latter representing movement. The proposed model computes self-attention features across the joint optical-flow and temporal-frame domain, capturing the relationships between the two streams within the transformer encoder.
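The key idea above is that attention is computed over a *joint* token sequence, so any appearance (frame) token can attend to any optical-flow token and vice versa. The sketch below illustrates this with plain numpy: random projections stand in for learned weights, and the dimensions, pooling, and linear head are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, d_k):
    # Random projections stand in for learned Q/K/V weights (illustrative only).
    rng = np.random.default_rng(0)
    d = tokens.shape[-1]
    Wq, Wk, Wv = (rng.standard_normal((d, d_k)) * d ** -0.5 for _ in range(3))
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = Q @ K.T / np.sqrt(d_k)          # every token attends to every token
    return softmax(scores) @ V

# Two streams: per-frame appearance features and per-frame optical-flow features.
T, d = 8, 16                                  # 8 frames, 16-dim features (assumed sizes)
frames = np.random.default_rng(1).standard_normal((T, d))
flow = np.random.default_rng(2).standard_normal((T, d))

# Joint token sequence: attention can relate any frame token to any flow token.
joint = np.concatenate([frames, flow], axis=0)   # shape (2T, d)
attended = self_attention(joint, d_k=16)

# Pool attended tokens and classify with an (untrained, illustrative) linear head.
pooled = attended.mean(axis=0)
num_classes = 5
W_head = np.random.default_rng(3).standard_normal((d, num_classes))
logits = pooled @ W_head
print(attended.shape, logits.shape)
```

Because frame and flow tokens sit in one sequence, the attention matrix contains cross-stream blocks (frame-to-flow and flow-to-frame), which is how a joint encoder relates appearance to motion without a separate fusion module.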

The experimental results show that the proposed methodology achieves excellent classification accuracy on three well-known human-activity video datasets.
