Learning Long-term Motion Embeddings for Efficient Kinematics Generation

April 13, 2026
arXiv:2604.11737

Authors

Josh Susskind, Björn Ommer, Nick Stracke, Kolja Bauer, Stefan Andreas Baumann

Abstract

Understanding and predicting motion is a fundamental component of visual intelligence. Although modern video models exhibit strong comprehension of scene dynamics, exploring multiple possible futures through full video synthesis remains prohibitively inefficient.

We model scene dynamics orders of magnitude more efficiently by directly operating on a long-term motion embedding that is learned from large-scale trajectories obtained from tracker models. This enables efficient generation of long, realistic motions that fulfill goals specified via text prompts or spatial pokes.

To achieve this, we first learn a highly compressed motion embedding with a temporal compression factor of 64x. In this space, we train a conditional flow-matching model to generate motion latents conditioned on task descriptions.
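
For concreteness, here is a minimal sketch of what a 64x temporal compressor over tracked point trajectories could look like: six stride-2 1D convolutions halve the temporal axis each time (2^6 = 64). All names, layer widths, and the two-channel (x, y) input are assumptions for illustration; the paper's actual architecture is not specified here.

```python
import torch
import torch.nn as nn

class TrajectoryEncoder(nn.Module):
    """Illustrative 64x temporal compressor for point tracks (architecture assumed)."""
    def __init__(self, in_dim: int = 2, latent_dim: int = 32, width: int = 128):
        super().__init__()
        layers, ch = [], in_dim
        for _ in range(6):  # six stride-2 stages -> 2**6 = 64x temporal compression
            layers += [nn.Conv1d(ch, width, kernel_size=4, stride=2, padding=1),
                       nn.SiLU()]
            ch = width
        layers.append(nn.Conv1d(width, latent_dim, kernel_size=1))
        self.net = nn.Sequential(*layers)

    def forward(self, traj: torch.Tensor) -> torch.Tensor:
        # traj: (B, T, in_dim) per-point (x, y) tracks -> (B, T // 64, latent_dim)
        return self.net(traj.transpose(1, 2)).transpose(1, 2)

tracks = torch.randn(4, 1024, 2)   # 4 tracks, 1024 frames, (x, y) coordinates
latents = TrajectoryEncoder()(tracks)
print(latents.shape)               # torch.Size([4, 16, 32])
```

Compressing 1024 frames to 16 latent steps is what makes generating long motions cheap relative to full video synthesis: the generative model only ever sees the short latent sequence.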
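Likewise, a minimal sketch of the conditional flow-matching objective in that latent space, using the standard linear-interpolation formulation; the toy VelocityNet and the conditioning shapes are assumptions, not the paper's model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VelocityNet(nn.Module):
    """Tiny per-step MLP standing in for the paper's (unspecified) backbone."""
    def __init__(self, dim: int = 32, cond_dim: int = 16, width: int = 256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1 + cond_dim, width),
                                 nn.SiLU(),
                                 nn.Linear(width, dim))

    def forward(self, zt, t, cond):
        # Broadcast flow time t and the condition over the compressed temporal axis.
        T = zt.shape[1]
        t = t[:, None, :].expand(-1, T, -1)        # (B, T, 1)
        cond = cond[:, None, :].expand(-1, T, -1)  # (B, T, cond_dim)
        return self.net(torch.cat([zt, t, cond], dim=-1))

def flow_matching_loss(model, z1, cond):
    """z1: clean motion latents (B, T, D); cond: text/poke embedding (B, cond_dim)."""
    z0 = torch.randn_like(z1)                         # noise endpoint of the path
    t = torch.rand(z1.shape[0], 1, device=z1.device)  # t ~ U(0, 1), one per sample
    zt = (1 - t[..., None]) * z0 + t[..., None] * z1  # straight-line interpolant
    v_pred = model(zt, t, cond)                       # predicted velocity field
    return F.mse_loss(v_pred, z1 - z0)                # target: constant velocity

model = VelocityNet()
loss = flow_matching_loss(model, torch.randn(8, 16, 32), torch.randn(8, 16))
loss.backward()
```

Sampling would then integrate the learned velocity field from noise at t = 0 to a motion latent at t = 1, e.g. with a few Euler steps, before decoding back to trajectories.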

The resulting motion distributions outperform those of both state-of-the-art video models and specialized task-specific approaches.

