What DINO saw: ALiBi positional encoding reduces positional bias in Vision Transformers

March 17, 2026
arXiv:2603.16840

Authors

Moritz Pawlowsky, Antonis Vamvakeros, Alexander Weiss, Anja Bielefeld, Samuel J. Cooper

Abstract

Vision transformers (ViTs), especially feature foundation models like DINOv2, learn rich representations that are useful for many downstream tasks. However, architectural choices such as the positional encoding can cause these models to display positional biases and artefacts that are independent of semantic content.
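As a toy illustration of how such a bias can be quantified, the sketch below fits a linear probe that regresses each patch's (row, col) grid position from frozen patch features; a low regression error means position is strongly encoded in the features. This is a minimal, hypothetical setup (the function name and the MSE-regression target are assumptions), not the paper's exact probing protocol.

```python
import torch
import torch.nn as nn

def probe_positional_bias(patch_feats: torch.Tensor, grid_h: int, grid_w: int,
                          epochs: int = 200, lr: float = 1e-2) -> float:
    """Fit a linear probe mapping frozen patch features to (row, col) grid
    positions. A low final MSE indicates strong positional information.
    Hypothetical setup; the paper's exact probing protocol may differ."""
    n_imgs, n_patches, dim = patch_feats.shape
    assert n_patches == grid_h * grid_w
    # Ground-truth (row, col) coordinates for every patch, shared across images
    ys, xs = torch.meshgrid(torch.arange(grid_h), torch.arange(grid_w), indexing="ij")
    target = torch.stack([ys.flatten(), xs.flatten()], dim=-1).float()  # (N, 2)
    target = target.expand(n_imgs, -1, -1).reshape(-1, 2)
    feats = patch_feats.reshape(-1, dim)
    probe = nn.Linear(dim, 2)
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(probe(feats), target)
        loss.backward()
        opt.step()
    return loss.item()

# Example with random stand-in features for a 14x14 patch grid:
feats = torch.randn(32, 14 * 14, 768)  # (images, patches, feature dim)
final_mse = probe_positional_bias(feats, grid_h=14, grid_w=14)
```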

This makes zero-shot adaptation difficult in fields like materials science, where images are often cross-sections of homogeneous microstructure (i.e. having no preferred direction).

In this work, we investigate positional bias in ViTs via linear probing, finding it present across a range of objectives and positional encodings, and subsequently reduce it by fine-tuning models to use ALiBi relative positional encoding. We demonstrate that these models retain desirable general semantics, and that their unbiased features can be used successfully for trainable segmentation of complex microscopy images.
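For context, ALiBi replaces learned positional embeddings with a per-head linear penalty added to the attention logits, proportional to the distance between tokens. The sketch below shows one way to extend this to a 2D ViT patch grid; the Euclidean-distance generalisation and the geometric slope schedule 2^(-8h/H) are assumptions carried over from the original 1D ALiBi, not necessarily the paper's exact formulation.

```python
import torch

def alibi_bias_2d(grid_h: int, grid_w: int, num_heads: int) -> torch.Tensor:
    """Per-head additive attention bias: -slope * distance between patch
    centres on the 2D grid. A 2D analogue of 1D ALiBi (an assumption,
    not necessarily the paper's exact formulation)."""
    # Patch centre coordinates, shape (N, 2) with N = grid_h * grid_w
    ys, xs = torch.meshgrid(torch.arange(grid_h), torch.arange(grid_w), indexing="ij")
    coords = torch.stack([ys.flatten(), xs.flatten()], dim=-1).float()
    # Pairwise Euclidean distances between all patches, shape (N, N)
    dist = torch.cdist(coords, coords, p=2)
    # Geometric head slopes as in the original ALiBi paper: 2^(-8h/H)
    slopes = torch.tensor([2 ** (-8.0 * (h + 1) / num_heads) for h in range(num_heads)])
    # Broadcast to (num_heads, N, N); added to attention logits pre-softmax
    return -slopes[:, None, None] * dist[None, :, :]

# Usage inside attention: attn = softmax(q @ k.transpose(-2, -1) / d**0.5 + bias)
bias = alibi_bias_2d(grid_h=14, grid_w=14, num_heads=12)  # (12, 196, 196)
```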
