TY - JOUR
T1 - ScaDyG
T2 - A New Paradigm for Large-Scale Dynamic Graph Learning
AU - Wu, Xiang
AU - Li, Xunkai
AU - Li, Rong-Hua
AU - Zhao, Kangfei
AU - Wang, Guoren
N1 - Publisher Copyright:
© 2012 IEEE.
PY - 2026
Y1 - 2026
N2 - Dynamic graphs (DGs), which capture time-evolving relationships between graph entities, have widespread real-world applications. To efficiently encode DGs for downstream tasks, most DG neural networks (DGNNs) follow the traditional message-passing mechanism and extend it with time-based techniques. Despite their effectiveness, the growth of historical interactions introduces significant scalability issues, particularly in industry scenarios. To address this limitation, we propose ScaDyG, with the core idea of designing a time-aware scalable learning paradigm as follows: 1) time-aware topology reformulation (TTR): ScaDyG first segments historical interactions into time steps (intra and inter) based on dynamic modeling, enabling weight-free and time-aware graph propagation within preprocessing; 2) dynamic temporal encoding (DTE): to further achieve fine-grained graph propagation within time steps, ScaDyG integrates temporal encoding through a combination of exponential functions in a scalable manner; and 3) hypernetwork-driven message aggregation: after obtaining the propagated features (i.e., messages), ScaDyG utilizes a hypernetwork to analyze historical dependencies, implementing node-wise representation by an adaptive temporal fusion. Extensive experiments on 12 datasets demonstrate that ScaDyG performs comparably to or even outperforms other SOTA methods in both node- and link-level downstream tasks, with fewer learnable parameters and higher efficiency.
AB - Dynamic graphs (DGs), which capture time-evolving relationships between graph entities, have widespread real-world applications. To efficiently encode DGs for downstream tasks, most DG neural networks (DGNNs) follow the traditional message-passing mechanism and extend it with time-based techniques. Despite their effectiveness, the growth of historical interactions introduces significant scalability issues, particularly in industry scenarios. To address this limitation, we propose ScaDyG, with the core idea of designing a time-aware scalable learning paradigm as follows: 1) time-aware topology reformulation (TTR): ScaDyG first segments historical interactions into time steps (intra and inter) based on dynamic modeling, enabling weight-free and time-aware graph propagation within preprocessing; 2) dynamic temporal encoding (DTE): to further achieve fine-grained graph propagation within time steps, ScaDyG integrates temporal encoding through a combination of exponential functions in a scalable manner; and 3) hypernetwork-driven message aggregation: after obtaining the propagated features (i.e., messages), ScaDyG utilizes a hypernetwork to analyze historical dependencies, implementing node-wise representation by an adaptive temporal fusion. Extensive experiments on 12 datasets demonstrate that ScaDyG performs comparably to or even outperforms other SOTA methods in both node- and link-level downstream tasks, with fewer learnable parameters and higher efficiency.
KW - Decoupled paradigm
KW - dynamic graph (DG) representation learning
KW - scalable graph neural network
UR - https://www.scopus.com/pages/publications/105027750025
U2 - 10.1109/TNNLS.2025.3650673
DO - 10.1109/TNNLS.2025.3650673
M3 - Article
AN - SCOPUS:105027750025
SN - 2162-237X
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
ER -