Evaluating Post-hoc Explanations of the Transformer-based Genome Language Model DNABERT-2

arXiv: 2604.21690

Authors

Bernhard Y. Renard, Isabel Kurth, Paulo Yanez Sarmiento

Abstract

Explaining deep neural network predictions on genome sequences enables biological insight and hypothesis generation, often of greater interest than predictive performance alone. While explanations of convolutional neural networks (CNNs) have been shown to capture relevant patterns in genome sequences, it is unclear whether this transfers to more expressive Transformer-based genome language models (gLMs).

To answer this question, we adapt AttnLRP, an extension of layer-wise relevance propagation to the attention mechanism, and apply it to the state-of-the-art gLM DNABERT-2. In doing so, we propose strategies to transfer explanations between the token and nucleotide levels.
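The abstract does not spell out the transfer strategy, but the underlying problem is concrete: DNABERT-2 uses BPE tokens that span several nucleotides, so token-level relevance must be mapped back onto individual bases. A minimal sketch of one simple approach, uniformly splitting each token's relevance across the bases it covers, is shown below; the function name and the uniform-split rule are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def token_to_nucleotide_relevance(tokens, token_relevance):
    """Distribute token-level relevance onto individual nucleotides.

    Illustrative assumption: each BPE token's relevance is split
    uniformly across the nucleotides it covers.
    """
    nucleotide_relevance = []
    for token, r in zip(tokens, token_relevance):
        # Each base in the token receives an equal share of its relevance.
        nucleotide_relevance.extend([r / len(token)] * len(token))
    return np.array(nucleotide_relevance)

# Example: three BPE tokens jointly covering the sequence "ATGCGTTA"
tokens = ["ATG", "CGT", "TA"]
token_relevance = [0.9, 0.3, -0.2]
print(token_to_nucleotide_relevance(tokens, token_relevance))
# -> [ 0.3  0.3  0.3  0.1  0.1  0.1 -0.1 -0.1]
```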

We evaluate this adaptation of AttnLRP on genomic datasets using multiple metrics. Furthermore, we provide an extensive comparison between the explanations of DNABERT-2 and those of a baseline CNN.
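The abstract does not name the metrics used. One common family for evaluating attributions is perturbation-based faithfulness checks: masking the most relevant positions should hurt the prediction more than masking random ones. Below is a minimal sketch along those lines, assuming a hypothetical predict_fn that maps a DNA sequence string to a scalar model score; it is not necessarily one of the paper's metrics.

```python
import numpy as np

def occlusion_fidelity(sequence, relevance, predict_fn, k=10, mask_base="N"):
    """Perturbation-style faithfulness check (illustrative sketch).

    Masks the k most relevant nucleotides and compares the resulting
    prediction drop against masking k random positions. `predict_fn`
    is an assumed callable: sequence string -> scalar score.
    """
    baseline = predict_fn(sequence)

    # Occlude the positions with the highest attributed relevance.
    top_idx = np.argsort(relevance)[::-1][:k]
    masked = list(sequence)
    for i in top_idx:
        masked[i] = mask_base
    drop_relevant = baseline - predict_fn("".join(masked))

    # Control: occlude the same number of randomly chosen positions.
    rng = np.random.default_rng(0)
    rand_idx = rng.choice(len(sequence), size=k, replace=False)
    masked = list(sequence)
    for i in rand_idx:
        masked[i] = mask_base
    drop_random = baseline - predict_fn("".join(masked))

    # Faithful attributions should yield drop_relevant >> drop_random.
    return drop_relevant, drop_random
```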

Our results demonstrate that AttnLRP yields reliable explanations corresponding to known biological patterns. Hence, like CNNs, gLMs can also help derive biological insights.

This work contributes to the explainability of gLMs and addresses the comparability of relevance attributions across different architectures.


