Continuous-Time Dynamic Graphs (CTDGs) underlie many real-world applications, motivating Graph Neural Networks (GNNs) tailored to this setting.
This paper provides a comprehensive review of Graph Representation Learning (GRL) on CTDGs, with a focus on Self-Supervised Representation Learning (SSRL).
The authors introduce a theoretical framework that analyzes the expressivity of CTDG models through an Information-Flow (IF) lens, quantifying their ability to propagate and encode temporal and structural information.
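To make the information-flow idea concrete, the following is a minimal, illustrative sketch rather than the paper's formal IF framework: given a stream of timestamped interactions, it computes which nodes could in principle influence a target node's representation by a query time, i.e., those connected to it by a time-respecting chain of interactions. The function name, the non-decreasing-timestamp convention, and the brute-force fixed-point loop are assumptions made here for illustration.

```python
def temporal_influence_set(events, target, t_query):
    """events: iterable of undirected interactions (u, v, t).
    Returns the nodes whose information can reach `target` through a chain
    of interactions with non-decreasing timestamps, all at or before t_query.
    (Hypothetical helper for illustration; not the paper's IF framework.)"""
    # latest[x] = latest time at which information held by x can still
    # reach `target`; the target itself can "receive" up to t_query
    latest = {target: t_query}
    changed = True
    while changed:  # relax to a fixed point; fine for small event streams
        changed = False
        for u, v, t in events:
            if t > t_query:
                continue
            for a, b in ((u, v), (v, u)):  # interactions carry info both ways
                if t <= latest.get(b, float("-inf")) and t > latest.get(a, float("-inf")):
                    latest[a] = t
                    changed = True
    return set(latest)


# Node 3 contacts node 2 only *after* node 2 last contacted node 1, so its
# information cannot flow to node 1 along a time-respecting path.
events = [(2, 1, 1.0), (3, 2, 2.0)]
print(temporal_influence_set(events, target=1, t_query=3.0))  # {1, 2}
```

A model whose message passing cannot reach everything in this influence set is, in this informal sense, losing temporal or structural information that is in principle available.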
The paper also categorizes existing CTDG methods and surveys SSRL approaches suited to CTDGs, such as predictive and contrastive objectives, which reduce the reliance on labeled data.
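As one example of how such self-supervision works without labels, here is a hedged NumPy sketch of an InfoNCE-style contrastive objective: an observed interaction supplies a positive pair of temporal node embeddings, and randomly sampled nodes supply negatives. The function and variable names are hypothetical and not tied to any specific method surveyed in the paper.

```python
import numpy as np


def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """anchor, positive: (d,) embeddings forming a positive pair
    (e.g., the two endpoints of an observed interaction);
    negatives: (k, d) embeddings of randomly sampled nodes.
    No labels are required: the event stream itself supplies supervision."""
    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

    # similarity of the anchor to the positive and to each negative
    logits = np.array([cosine(anchor, positive)] +
                      [cosine(anchor, n) for n in negatives]) / temperature
    # cross-entropy with the positive pair treated as the correct "class"
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[0]


rng = np.random.default_rng(0)
d = 16
z_u = rng.normal(size=d)                # embedding of node u at interaction time
z_v = z_u + 0.1 * rng.normal(size=d)    # embedding of the node it interacted with
z_neg = rng.normal(size=(5, d))         # embeddings of non-interacting nodes
print(float(info_nce_loss(z_u, z_v, z_neg)))
```

Predictive SSRL variants follow the same pattern but replace the contrastive term with a regression or classification target derived from future events, again requiring no manual labels.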