This study examines the importance of pairwise feature interactions in constructing feature graphs for Graph Neural Networks (GNNs).
The research leverages existing GNN models and tools to analyze the relationship between feature graph structure and a model's effectiveness in capturing feature interactions.
Experiments on synthetic datasets reveal that edges connecting interacting features play a crucial role in enabling GNNs to model feature interactions effectively, whereas edges between non-interacting features introduce noise and degrade model performance.
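To make the contrast between the two graph constructions concrete, the sketch below builds a per-sample feature graph restricted to interacting pairs versus a complete graph over all features. This is a minimal illustration, not the paper's code; the function names, the number of features, and the interacting pairs are hypothetical.

```python
# Illustrative sketch: nodes are features, edges connect feature pairs.
# A sparse graph keeps only interaction edges; a complete graph connects all pairs.
from itertools import combinations

def sparse_feature_graph(interacting_pairs):
    """Keep only edges between features that are known to interact."""
    edges = []
    for i, j in interacting_pairs:
        # Store both directions, a common convention for undirected GNN inputs.
        edges.append((i, j))
        edges.append((j, i))
    return edges

def complete_feature_graph(num_features):
    """Connect every pair of features, regardless of interaction."""
    edges = []
    for i, j in combinations(range(num_features), 2):
        edges.append((i, j))
        edges.append((j, i))
    return edges

# Hypothetical example: 5 features with ground-truth interactions (0,1) and (2,3).
sparse = sparse_feature_graph([(0, 1), (2, 3)])
complete = complete_feature_graph(5)
print(len(sparse), len(complete))  # 4 directed edges vs. 20
```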
The study also provides theoretical support for selecting sparse feature graphs based on the Minimum Description Length (MDL) principle.
Sparse feature graphs, retaining only necessary interaction edges, are shown to provide a more efficient and interpretable representation compared to complete graphs, in line with Occam's Razor.
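The intuition behind the MDL argument can be sketched with a simple two-part code, in which the total description length is the cost of encoding the graph structure plus the cost of encoding the data given the fitted model. The formulation and all numeric values below are assumptions made for illustration, not the paper's exact derivation.

```python
# Illustrative two-part MDL comparison (assumed formulation):
# total description length = bits to encode the graph + bits to encode the data given the model.
import math

def graph_bits(num_edges, num_features):
    """Bits to specify which feature pairs are connected (upper-bound estimate)."""
    num_possible = num_features * (num_features - 1) // 2
    return num_edges * math.log2(max(num_possible, 2))

def total_description_length(num_edges, num_features, data_fit_bits):
    """Two-part code: model bits plus data-given-model bits (hypothetical)."""
    return graph_bits(num_edges, num_features) + data_fit_bits

# Hypothetical numbers: both graphs contain the true interaction edges and fit
# the data comparably, but the complete graph pays for many non-interaction edges.
sparse_dl = total_description_length(num_edges=2, num_features=10, data_fit_bits=500.0)
complete_dl = total_description_length(num_edges=45, num_features=10, data_fit_bits=495.0)
print(f"sparse: {sparse_dl:.1f} bits, complete: {complete_dl:.1f} bits")
```

Under these assumed numbers, the sparse graph attains a lower total description length because the small gain in fit from extra edges does not offset their structural encoding cost, which is the sense in which the MDL principle favors retaining only the necessary interaction edges.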
The findings offer theoretical insights and practical guidelines for designing feature graphs to enhance the performance and interpretability of GNN models.