Graph learning models are effective at learning representations from graph-structured data across a wide range of scenarios.
The 'pre-training, adaptation' scheme has become a common paradigm for training such models.
Graph prompting has emerged as a promising approach for the adaptation phase: it introduces lightweight, trainable prompts while keeping the pre-trained model frozen.
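To make this idea concrete, the sketch below shows one common instantiation, feature-level prompting, in which a learnable prompt vector is added to node features before they pass through a frozen pre-trained encoder. The class name, the `encoder(x, adj)` call signature, and the linear task head are illustrative assumptions rather than the interface of any specific method surveyed here.

```python
import torch
import torch.nn as nn


class PromptedGraphModel(nn.Module):
    """Minimal sketch of feature-level graph prompting: a trainable prompt
    vector is added to node features, while the pre-trained encoder's
    weights stay frozen."""

    def __init__(self, pretrained_encoder: nn.Module, feat_dim: int,
                 hidden_dim: int, num_classes: int):
        super().__init__()
        self.encoder = pretrained_encoder
        for param in self.encoder.parameters():
            param.requires_grad = False               # pre-trained model stays unchanged
        self.prompt = nn.Parameter(torch.zeros(feat_dim))   # trainable prompt vector
        self.head = nn.Linear(hidden_dim, num_classes)       # lightweight task head

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        x = x + self.prompt           # inject the prompt into every node's features
        z = self.encoder(x, adj)      # frozen encoder; (features, adjacency) signature assumed
        return self.head(z)           # only the prompt and the head receive gradients
```

During adaptation, only `self.prompt` and `self.head` are handed to the optimizer, so the number of tuned parameters is a small fraction of the encoder's.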
This paper reviews recent advances in graph prompting, covering pre-training methods, mainstream prompting techniques, real-world applications, and future directions.