Prompt tuning methods for Graph Neural Networks (GNNs) have become popular as a way to bridge the semantic gap between the pre-training and fine-tuning objectives.
A new unsupervised prompting method based on consistency regularization through pseudo-labeling is proposed to enhance a pre-trained GNN's generalization without updating its parameters and without any labeled data.
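To make this recipe concrete, below is a minimal sketch of unsupervised, consistency-based prompt tuning in this spirit. It assumes a frozen pre-trained `gnn` that maps node features and edges to class logits; the additive feature prompt, the confidence threshold `tau`, and all names are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def tune_prompt(gnn, x, edge_index, num_steps=100, lr=1e-2, tau=0.9):
    """Learn a graph prompt with a frozen GNN and no labels (sketch)."""
    for p in gnn.parameters():
        p.requires_grad_(False)  # the pre-trained GNN is never updated

    # Learnable additive prompt on node features (one shared prompt vector,
    # broadcast over all nodes) -- an assumed, simple prompt design.
    prompt = torch.zeros(1, x.size(1), requires_grad=True)
    opt = torch.optim.Adam([prompt], lr=lr)

    for _ in range(num_steps):
        # Pseudo-labels from the frozen GNN on the un-prompted graph
        with torch.no_grad():
            probs = F.softmax(gnn(x, edge_index), dim=-1)
            conf, pseudo = probs.max(dim=-1)
            mask = conf > tau  # keep only confident pseudo-labels

        # Consistency regularization: predictions on the prompted graph
        # should agree with the confident pseudo-labels.
        logits = gnn(x + prompt, edge_index)
        loss = F.cross_entropy(logits[mask], pseudo[mask])

        opt.zero_grad()
        loss.backward()
        opt.step()

    return prompt.detach()
```

Since only the prompt receives gradients, the optimization stays cheap and the pre-trained weights are untouched, which is the key constraint of this setting.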
The approach aligns the distribution of the prompted graphs with that of the original data and reduces biased predictions; in experiments it outperforms state-of-the-art prompting methods.
The work also introduces a challenging problem setup for evaluating GNN prompting methods: generalizing to a target dataset under covariate shift, without updating GNN parameters and without labeled data.