
Understanding DNNs in Feature Interaction Models: A Dimensional Collapse Perspective

2604.26489

Authors

Jiancheng Wang, Mingjia Yin, Hao Wang, Enhong Chen

Abstract

DNNs have gained widespread adoption in feature interaction recommendation models. However, their role has been the subject of longstanding debate.

On one hand, some works claim that DNNs can implicitly capture high-order feature interactions. On the other hand, recent studies have highlighted the limitations of DNNs in effectively learning dot products, i.e., second-order interactions, let alone higher-order ones.

In this paper, we present a novel perspective to understand the effectiveness of DNNs: their impact on the dimensional robustness of the representations. In particular, we conduct extensive experiments involving both parallel DNNs and stacked DNNs.

Our evaluation encompasses an overall study of complete DNNs within two feature interaction models, alongside a fine-grained ablation analysis of components within DNNs. Experimental results demonstrate that both parallel and stacked DNNs can effectively mitigate the dimensional collapse of embeddings.
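The central quantity here, dimensional collapse, means that learned embeddings occupy only a narrow subspace of their nominal dimension. One common way to quantify this (a sketch of the general idea, not necessarily the paper's exact metric) is the entropy-based effective rank of the embedding matrix's singular value spectrum; all names and numbers below are illustrative:

```python
import numpy as np

def effective_rank(emb: np.ndarray) -> float:
    """Shannon-entropy-based effective rank of an embedding matrix.

    An effective rank far below the embedding dimension indicates
    dimensional collapse: the representations concentrate in a
    low-dimensional subspace of the available space.
    """
    s = np.linalg.svd(emb, compute_uv=False)
    p = s / s.sum()                          # normalized singular values
    entropy = -(p * np.log(p + 1e-12)).sum() # spectral entropy
    return float(np.exp(entropy))

rng = np.random.default_rng(0)

# Collapsed embeddings: 1000 rows lying near a single direction in R^16.
direction = rng.normal(size=(1, 16))
collapsed = rng.normal(size=(1000, 1)) @ direction \
    + 0.01 * rng.normal(size=(1000, 16))

# Healthy embeddings: rows spread across all 16 dimensions.
healthy = rng.normal(size=(1000, 16))

print(effective_rank(collapsed))  # close to 1
print(effective_rank(healthy))    # close to 16
```

Under this measure, mitigating collapse corresponds to the DNN pushing the embeddings' effective rank back toward the full embedding dimension.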

Furthermore, a gradient-based theoretical analysis, supported by empirical evidence, uncovers the underlying mechanisms of dimensional collapse.


