Graph neural networks (GNNs) are constrained by well-known issues such as limited expressiveness, over-smoothing, and a weak ability to model long-range dependencies.
The Generative Graph Pattern Machine (G$^2$PM) is a new framework introduced to move beyond message passing and address these limitations.
G$^2$PM uses generative Transformer pre-training over sequences of substructures to learn generalizable representations for graph instances.
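The sketch below illustrates this idea under stated assumptions, not the paper's actual implementation: it assumes each graph instance has already been tokenized into a sequence of substructure IDs drawn from a fixed vocabulary (the names `PatternSequenceTransformer`, the vocabulary size, and the placeholder pattern sequences are all hypothetical), and it pre-trains a causal Transformer with a next-token objective over those sequences.

```python
# Minimal sketch (assumptions, not the official G^2PM code): graphs are
# assumed to be tokenized into sequences of substructure IDs, and a
# Transformer is pre-trained generatively to predict the next substructure.
import torch
import torch.nn as nn

class PatternSequenceTransformer(nn.Module):
    def __init__(self, vocab_size=1024, d_model=256, nhead=8, num_layers=4, max_len=64):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)   # substructure-ID embeddings
        self.pos_emb = nn.Embedding(max_len, d_model)        # position in the pattern sequence
        layer = nn.TransformerEncoderLayer(
            d_model, nhead, dim_feedforward=4 * d_model, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)         # logits over substructure vocabulary

    def forward(self, pattern_ids):
        # pattern_ids: (batch, seq_len) integer IDs of sampled substructures
        b, s = pattern_ids.shape
        pos = torch.arange(s, device=pattern_ids.device).unsqueeze(0)
        h = self.token_emb(pattern_ids) + self.pos_emb(pos)
        # Causal mask: each position attends only to earlier patterns
        mask = nn.Transformer.generate_square_subsequent_mask(s).to(pattern_ids.device)
        h = self.encoder(h, mask=mask)
        return self.lm_head(h)

# One generative pre-training step: predict the next substructure in each sequence
model = PatternSequenceTransformer()
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
pattern_ids = torch.randint(0, 1024, (8, 32))  # placeholder pattern sequences
logits = model(pattern_ids[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)), pattern_ids[:, 1:].reshape(-1)
)
loss.backward()
optimizer.step()
```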
On the ogbn-arxiv benchmark, G$^2$PM scales effectively, continuing to improve with model sizes up to 60M parameters and outperforming prior generative approaches.