Graph convolutional neural networks (GCNNs) are powerful tools for analyzing graph-structured data and have achieved success in a wide range of applications.
Understanding the stability of GCNN models, i.e., how their outputs respond to small changes in the underlying graph structure, is crucial yet remains limited.
This study examines how perturbations in graph topology affect GCNN outputs and presents a new method for assessing model stability.
The proposed probabilistic framework characterizes the relationship among data properties, graph perturbations, and model stability, and its predictions are validated through experiments.
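To make the stability notion concrete, the following is a minimal sketch, not the paper's framework: it compares the outputs of a simple one-layer GCN on a small random graph before and after a single-edge perturbation, using the relative change in the output as an empirical stability proxy. The layer definition, graph, weights, and perturbation choice are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalized_adjacency(A):
    """Symmetrically normalized adjacency with self-loops: D^{-1/2}(A+I)D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_layer(A, X, W):
    """One graph-convolution layer: ReLU(normalize(A) @ X @ W)."""
    return np.maximum(normalized_adjacency(A) @ X @ W, 0.0)

# Small random graph, node features, and layer weights (all assumptions).
n, f_in, f_out = 20, 8, 4
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.triu(A, 1); A = A + A.T            # symmetric, no self-loops
X = rng.standard_normal((n, f_in))
W = rng.standard_normal((f_in, f_out))

# Perturb the topology: flip one edge (add it if absent, remove it if present).
A_pert = A.copy()
i, j = 0, 1
A_pert[i, j] = A_pert[j, i] = 1.0 - A_pert[i, j]

# Empirical stability proxy: relative change in layer output under the perturbation.
Y, Y_pert = gcn_layer(A, X, W), gcn_layer(A_pert, X, W)
rel_change = np.linalg.norm(Y - Y_pert) / np.linalg.norm(Y)
print(f"Relative output change after flipping edge ({i},{j}): {rel_change:.4f}")
```

A stability analysis in the spirit described above would bound or characterize such output changes as a function of the perturbation size and the data's properties, rather than measuring them empirically for a single graph.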