XMorph: Explainable Brain Tumor Analysis Via LLM-Assisted Hybrid Deep Intelligence

February 24, 2026 (arXiv:2602.21178)

Authors

Sepehr Salem Ghahfarokhi, M. Moein Esfahani, Raj Sunderraman, Vince Calhoun, Mohammed Alser

Abstract

Deep learning has significantly advanced automated brain tumor diagnosis, yet clinical adoption remains limited by interpretability and computational constraints. Conventional models often act as opaque "black boxes" and fail to quantify the complex, irregular tumor boundaries that characterize malignant growth.

To address these challenges, we present XMorph, an explainable and computationally efficient framework for fine-grained classification of three prominent brain tumor types: glioma, meningioma, and pituitary tumors. We propose an Information-Weighted Boundary Normalization (IWBN) mechanism that emphasizes diagnostically relevant boundary regions alongside nonlinear chaotic and clinically validated features, enabling a richer morphological representation of tumor growth.
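The abstract does not spell out how IWBN is computed. As an illustration only, the sketch below assumes boundary "information" can be proxied by the local intensity-gradient magnitude, which is normalized and used to up-weight boundary pixels; the function names (`boundary_weight_map`, `apply_iwbn`) and the weighting form are hypothetical, not the paper's actual mechanism.

```python
import numpy as np

def boundary_weight_map(image, eps=1e-8):
    """Illustrative boundary-emphasis map (hypothetical stand-in for IWBN).

    Uses finite-difference gradient magnitude as a proxy for boundary
    information, normalized to [0, 1].
    """
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    return (mag - mag.min()) / (mag.max() - mag.min() + eps)

def apply_iwbn(image, alpha=0.5):
    """Re-weight intensities so boundary pixels contribute more (assumed form)."""
    w = boundary_weight_map(image)
    return image * (1.0 + alpha * w)

# Toy 2D "scan": a bright square on a dark background.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
out = apply_iwbn(img)
# Corner/edge pixels of the square are amplified relative to its interior.
```

Any weighting of this shape leaves homogeneous regions untouched (weight near 0) while amplifying irregular boundary structure, which is the intuition the abstract describes for malignant growth patterns.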

A dual-channel explainable AI module combines Grad-CAM++ visual cues with LLM-generated textual rationales, translating model reasoning into clinically interpretable insights. The proposed framework achieves a classification accuracy of 96.0%, demonstrating that explainability and high performance can coexist in AI-based medical imaging systems.
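To make the visual-cue channel concrete, here is a minimal numpy sketch of the CAM family of methods the module builds on: channel weights derived from gradients are used to combine the last convolutional layer's feature maps into a class-evidence heatmap. This uses plain Grad-CAM's global-average-pooled weights for brevity; Grad-CAM++ replaces that step with higher-order gradient terms, and the `grad_cam_heatmap` function and random demo inputs are illustrative, not XMorph's code.

```python
import numpy as np

def grad_cam_heatmap(activations, gradients):
    """Simplified Grad-CAM-style heatmap (not the full Grad-CAM++ weighting).

    activations: (C, H, W) feature maps from the last conv layer.
    gradients:   (C, H, W) gradients of the class score w.r.t. those maps.
    """
    # Channel weights: global-average-pooled gradients (plain Grad-CAM).
    weights = gradients.mean(axis=(1, 2))             # (C,)
    cam = np.tensordot(weights, activations, axes=1)  # (H, W)
    cam = np.maximum(cam, 0)                          # ReLU: keep positive evidence
    return cam / (cam.max() + 1e-8)                   # normalize to [0, 1]

# Demo with random feature maps and gradients (shapes only; no real model).
rng = np.random.default_rng(0)
acts = rng.random((4, 7, 7))
grads = rng.random((4, 7, 7))
heat = grad_cam_heatmap(acts, grads)   # (7, 7) map in [0, 1]
```

In a dual-channel setup like the one described, a heatmap of this kind would be rendered over the MRI slice while the LLM channel produces the accompanying textual rationale.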

The source code and materials for XMorph are all publicly available at: https://github.com/ALSER-Lab/XMorph.
