Attend what matters: Leveraging vision foundational models for breast cancer classification using mammograms

April 21, 2026 · arXiv:2604.19350

Authors

Chetan Arora, Samyak Sanghvi, Piyush Miglani, Sarvesh Shashikumar, Kaustubh R Borgavi

Abstract

Vision Transformers (ViT) have become the architecture of choice for many computer vision tasks, yet their performance in computer-aided diagnostics remains limited. Focusing on breast cancer detection from mammograms, we identify two main causes for this shortfall.

First, medical images are high-resolution while abnormalities are small, which produces an excessive number of tokens and makes it difficult for softmax-based attention to localize and attend to the relevant regions. Second, medical image classification is inherently fine-grained, with low inter-class and high intra-class variability, a regime in which standard cross-entropy training is insufficient.
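To make the token-count problem concrete, here is a small arithmetic sketch (our illustration, not from the paper; the mammogram resolution and the 16-pixel patch size are assumed values typical of full-field digital mammography and a ViT-B/16):

```python
# Illustrative token-count arithmetic (assumed image and patch sizes,
# not figures from the paper).

def vit_token_count(height: int, width: int, patch_size: int = 16) -> int:
    """Number of patch tokens a ViT produces for an image of this size."""
    return (height // patch_size) * (width // patch_size)

# A typical full-field digital mammogram vs. a standard ImageNet crop.
print(vit_token_count(3328, 2560))  # 33280 tokens
print(vit_token_count(224, 224))    # 196 tokens
```

A lesion spanning only a handful of patches then competes with tens of thousands of background tokens for softmax attention mass, which is the localization difficulty described above.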

To overcome these challenges, we propose a framework with three key components: (1) region-of-interest (RoI) based token reduction that uses an object detection model to guide attention; (2) contrastive learning between selected RoIs to enhance fine-grained discrimination through hard-negative-based training; and (3) a DINOv2-pretrained ViT that captures localization-aware, fine-grained features instead of global CLIP representations. Experiments on public mammography datasets demonstrate that our method outperforms existing baselines, establishing its effectiveness and potential clinical utility for large-scale breast cancer screening.
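As a rough sketch of how components (1) and (2) could fit together, the PyTorch snippet below crops detector-proposed RoIs, embeds them with a DINOv2-pretrained ViT, and applies a simplified supervised contrastive loss. This is our reconstruction under stated assumptions, not the authors' implementation: the backbone variant, crop size, temperature, and loss form are placeholders, and the object detector supplying the boxes is not shown.

```python
# A minimal sketch of RoI-based token reduction plus contrastive learning
# over RoI embeddings. Assumptions (not from the paper): DINOv2 ViT-S/14
# backbone, 224x224 RoI crops, temperature 0.1, simplified SupCon loss.
import torch
import torch.nn.functional as F

# DINOv2-pretrained ViT via torch.hub (downloads weights on first call);
# forward() returns one embedding per input crop.
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")


def embed_rois(image: torch.Tensor, boxes: torch.Tensor,
               crop: int = 224) -> torch.Tensor:
    """Crop detector-proposed RoIs, resize them, and embed each with the ViT.

    image: (3, H, W) float mammogram; boxes: (N, 4) pixel coords (x1, y1, x2, y2).
    Running the ViT only on RoI crops is the token-reduction step: attention
    covers a few hundred tokens per RoI instead of tens of thousands per image.
    """
    crops = [
        F.interpolate(image[:, y1:y2, x1:x2].unsqueeze(0),
                      size=(crop, crop), mode="bilinear", align_corners=False)
        for x1, y1, x2, y2 in boxes.round().long().tolist()
    ]
    return backbone(torch.cat(crops))  # (N, D) RoI embeddings


def roi_contrastive_loss(z: torch.Tensor, labels: torch.Tensor,
                         tau: float = 0.1) -> torch.Tensor:
    """Simplified supervised contrastive loss over a batch of RoI embeddings.

    RoIs sharing a label are pulled together; visually similar RoIs with
    different labels act as in-batch hard negatives. Assumes the batch
    contains at least one positive pair.
    """
    z = F.normalize(z, dim=1)
    sim = z @ z.T / tau
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float("-inf"))       # drop self-similarity
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    return -log_prob[pos].mean()
```

In training, this term would typically be combined with a cross-entropy loss on the image-level prediction, e.g. `loss = ce + lam * roi_contrastive_loss(z, labels)`; the weighting `lam` and the choice of detector are likewise assumptions here.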

Our code is available for reproducibility here: https://aih-iitd.github.io/publications/attend-what-matters
