Despite advancements in Graph Neural Networks (GNNs), adaptive attacks continue to challenge their robustness.
Certified robustness based on randomized smoothing has emerged as a promising solution, offering provable guarantees that a model's predictions remain stable under adversarial perturbations.
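To make the voting mechanism behind randomized smoothing concrete, here is a minimal, hypothetical sketch: a stand-in base classifier is queried on many randomly perturbed copies of an input graph, and the smoothed prediction is the majority vote. The base classifier, noise model, and parameters below are illustrative assumptions, not the AuditVotes method itself.

```python
import random
from collections import Counter

def base_classifier(adj):
    # Hypothetical stand-in for a base GNN: predicts class 1
    # when the graph has at least two edges, else class 0.
    edges = sum(sum(row) for row in adj) // 2
    return 1 if edges >= 2 else 0

def randomize(adj, flip_prob, rng):
    # Flip each potential undirected edge independently with
    # probability flip_prob (a simple edge-flip noise model).
    n = len(adj)
    noisy = [row[:] for row in adj]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < flip_prob:
                noisy[i][j] = noisy[j][i] = 1 - noisy[i][j]
    return noisy

def smoothed_predict(adj, num_samples=1000, flip_prob=0.1, seed=0):
    # Majority vote over randomized copies of the input graph;
    # also return the empirical vote share of the winning class.
    rng = random.Random(seed)
    votes = Counter(base_classifier(randomize(adj, flip_prob, rng))
                    for _ in range(num_samples))
    label, count = votes.most_common(1)[0]
    return label, count / num_samples

adj = [[0, 1, 1, 0],
       [1, 0, 1, 0],
       [1, 1, 0, 1],
       [0, 0, 1, 0]]  # toy graph with 4 edges
label, p_hat = smoothed_predict(adj)
```

In certification, the winning class's vote share is lower-bounded statistically, and a prediction is certified if that bound exceeds the threshold implied by the perturbation budget.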
The proposed framework, AuditVotes, integrates randomized smoothing with input augmentation and conditional smoothing, improving both the quality of the randomized inputs and the consistency of the resulting predictions.
Experimental results demonstrate that AuditVotes significantly improves the clean accuracy, certified robustness, and empirical robustness of GNNs.