OmniGCD: Abstracting Generalized Category Discovery for Modality Agnosticism

April 16, 2026 · arXiv:2604.14762

Authors

Arnold Wiliem, Kien Nguyen Thanh, Wei Xiang, Clinton Fookes, Jordan Shipard

Abstract

Generalized Category Discovery (GCD) challenges methods to identify known and novel classes using partially labeled data, mirroring human category learning. Unlike prior GCD methods, which operate within a single modality and require dataset-specific fine-tuning, we propose a modality-agnostic GCD approach inspired by the human brain's abstract category formation.

Our OmniGCD leverages modality-specific encoders (e.g., vision, audio, text, remote sensing) to process inputs, followed by dimensionality reduction to construct a GCD latent space, which is transformed at test time into a representation better suited for clustering by a novel, synthetically trained Transformer-based model. To evaluate OmniGCD, we introduce a zero-shot GCD setting in which no dataset-specific fine-tuning is allowed, enabling modality-agnostic category discovery. Trained once on synthetic data, OmniGCD performs zero-shot GCD across 16 datasets spanning four modalities, improving classification accuracy on known and novel classes over baselines (average improvements of +6.2, +17.9, +1.5, and +12.7 percentage points for vision, text, audio, and remote sensing, respectively). This highlights the importance of strong encoders while decoupling representation learning from category discovery.
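The pipeline described above can be sketched end to end. This is a minimal illustration only: the function names, the toy data, and the use of plain PCA and k-means are assumptions made for clarity; the paper's actual method replaces these with a synthetically trained Transformer that transforms the latent space at test time.

```python
# Illustrative sketch of the pipeline the abstract describes:
# modality-specific encoder features -> dimensionality reduction into a
# shared "GCD latent space" -> clustering to discover known + novel classes.
# PCA and k-means here are simple stand-ins, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

def reduce_dim(feats, k):
    """Project features onto their top-k principal components (simple PCA)."""
    centered = feats - feats.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T

def kmeans(x, n_clusters, iters=50):
    """Plain k-means; one center is seeded per blob to keep the toy deterministic."""
    centers = x[:: len(x) // n_clusters][:n_clusters].copy()
    for _ in range(iters):
        dists = ((x[:, None] - centers[None]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for c in range(n_clusters):
            if (labels == c).any():
                centers[c] = x[labels == c].mean(0)
    return labels

# Toy "encoder outputs": three well-separated classes in a 128-d feature space.
feats = np.concatenate(
    [rng.normal(loc=m, scale=0.3, size=(50, 128)) for m in (-3.0, 0.0, 3.0)]
)
latent = reduce_dim(feats, k=8)        # shared GCD latent space
labels = kmeans(latent, n_clusters=3)  # category discovery via clustering
```

The key design point the abstract argues for is the decoupling: the encoder and the dimension-reduction step are fixed per modality, while the discovery step (here, clustering) is modality-agnostic and needs no dataset-specific fine-tuning.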

Improvements to modality-agnostic methods propagate across all modalities, enabling encoder development independent of GCD. Our work serves as a benchmark for future modality-agnostic GCD research, paving the way for scalable, human-inspired category discovery.

All code is publicly available.

Details

  • © 2026 takara.ai Ltd
  • Content is sourced from third-party publications.