Researchers propose a new method called Adversarial Curriculum Graph-Free Knowledge Distillation (ACGKD) for data-free knowledge distillation of graph neural networks.
ACGKD leverages the Binary Concrete distribution to model graph structures and introduces a tunable parameter that reduces the spatial complexity of the generated pseudo-graphs.
The method accelerates distillation by enabling efficient gradient computation with respect to the graph structure.
ACGKD achieves state-of-the-art performance in distilling knowledge from GNNs without access to the original training data.
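To make the key mechanism concrete: the Binary Concrete distribution is a continuous relaxation of a Bernoulli variable, so edge existence in a pseudo-graph can be sampled in a way that admits gradients through the sampling step. Below is a minimal NumPy sketch of Binary Concrete sampling applied to a learnable edge-logit matrix; the variable names (`edge_logits`, the 5-node graph, the temperature of 0.5) are illustrative assumptions, not details from the ACGKD paper.

```python
import numpy as np

def binary_concrete_sample(logits, temperature, rng):
    """Draw a relaxed binary sample via the Binary Concrete distribution.

    logits: unnormalized log-odds of each edge existing
    temperature: > 0; lower values push samples toward hard {0, 1}
    """
    # Logistic noise: L = log(u) - log(1 - u) with u ~ Uniform(0, 1).
    u = rng.uniform(1e-8, 1.0 - 1e-8, size=logits.shape)
    logistic_noise = np.log(u) - np.log(1.0 - u)
    # Sigmoid of the temperature-scaled perturbed logits gives a value
    # in (0, 1); the whole expression is differentiable in `logits`.
    return 1.0 / (1.0 + np.exp(-(logits + logistic_noise) / temperature))

# Hypothetical pseudo-graph: learnable edge logits for a 5-node graph.
rng = np.random.default_rng(0)
n = 5
edge_logits = rng.normal(size=(n, n))
edge_logits = (edge_logits + edge_logits.T) / 2  # symmetric (undirected graph)

soft_adj = binary_concrete_sample(edge_logits, temperature=0.5, rng=rng)
soft_adj = (soft_adj + soft_adj.T) / 2  # keep the relaxed adjacency symmetric
```

Because every entry of `soft_adj` is a smooth function of `edge_logits`, a generator can update the graph structure by ordinary backpropagation, which is the property the summary refers to when it mentions efficient gradient computation for the graph structure.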