Graph anomaly detection (GAD) is challenging due to the scarcity of abnormal nodes and the high cost of label annotation.
Graph pre-training has emerged as an effective approach for label-efficient learning in GAD, but anomalous nodes exhibit a mix of homophily and heterophily, which calls for node-specific filtering rather than a single fixed filter.
We propose PAF (Pre-Training and Adaptive Fine-tuning), a framework that addresses these challenges by jointly training low- and high-pass filters during pre-training and combining their representations through a gated fusion network during fine-tuning.
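The source does not give implementation details, but the filter-fusion idea can be illustrated with a minimal sketch. The snippet below assumes standard spectral forms (low-pass as neighborhood averaging, $\hat{A}X$, and high-pass as its complement, $X - \hat{A}X$) and a sigmoid gate; the class name GatedFilterFusion and all parameter choices are hypothetical, not the authors' code.

import torch
import torch.nn as nn

class GatedFilterFusion(nn.Module):
    """Illustrative sketch: per-node gated mix of low- and high-pass
    graph-filter outputs (not the PAF reference implementation)."""
    def __init__(self, dim):
        super().__init__()
        # Hypothetical gating layer over concatenated filter outputs.
        self.gate = nn.Linear(2 * dim, 1)

    def forward(self, x, a_norm):
        # Low-pass: neighborhood averaging smooths node features.
        h_low = a_norm @ x
        # High-pass: residual against the neighborhood sharpens differences.
        h_high = x - a_norm @ x
        # Per-node gate selects how much smooth vs. sharp signal to keep.
        g = torch.sigmoid(self.gate(torch.cat([h_low, h_high], dim=-1)))
        return g * h_low + (1 - g) * h_high

# Toy usage: 4 nodes, 8-dim features, row-normalized adjacency.
x = torch.randn(4, 8)
a = torch.rand(4, 4)
a_norm = a / a.sum(dim=1, keepdim=True)
fused = GatedFilterFusion(8)(x, a_norm)

A learned per-node gate of this kind lets homophilous nodes lean on the low-pass signal while heterophilous (often anomalous) nodes draw on the high-pass one, which is the selectivity the abstract describes.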
Experiments on ten benchmark datasets consistently demonstrate the effectiveness of PAF for GAD.