TY - GEN
T1 - Toward Effective Digraph Representation Learning
T2 - 34th ACM Web Conference, WWW 2025
AU - Li, Xunkai
AU - Su, Daohan
AU - Wu, Zhengyu
AU - Zeng, Guang
AU - Qin, Hongchao
AU - Li, Rong-Hua
AU - Wang, Guoren
N1 - Publisher Copyright:
© 2025 Copyright held by the owner/author(s).
PY - 2025/4/28
Y1 - 2025/4/28
N2 - The q-parameterized magnetic Laplacian serves as the foundation of directed graph (digraph) convolution, enabling this class of digraph neural networks (MagDGs) to encode node features and structural insights via complex-domain message passing. Despite their success, limitations remain: (1) The performance of MagDGs depends on selecting an appropriate q-parameter to construct suitable graph propagation equations in the complex domain. This parameter tuning limits model flexibility and significantly increases manual effort. (2) Most approaches apply the same complex-domain propagation and aggregation rules to all nodes, neglecting their unique digraph contexts. This oversight results in sub-optimal performance. To address these issues, we propose two key techniques: (1) MAP, a plug-and-play complex-domain propagation optimization strategy that integrates seamlessly into any MagDG to improve predictions while maintaining high running efficiency. (2) MAP++, a new digraph learning framework that further incorporates a learnable mechanism to achieve adaptive edge-wise propagation and node-wise aggregation in the complex domain for better performance. Extensive experiments on 12 datasets demonstrate that MAP offers flexibility, as it can be incorporated into any MagDG, and scalability, as it can handle web-scale digraphs. MAP++ achieves SOTA predictive performance on 4 different downstream tasks.
AB - The q-parameterized magnetic Laplacian serves as the foundation of directed graph (digraph) convolution, enabling this class of digraph neural networks (MagDGs) to encode node features and structural insights via complex-domain message passing. Despite their success, limitations remain: (1) The performance of MagDGs depends on selecting an appropriate q-parameter to construct suitable graph propagation equations in the complex domain. This parameter tuning limits model flexibility and significantly increases manual effort. (2) Most approaches apply the same complex-domain propagation and aggregation rules to all nodes, neglecting their unique digraph contexts. This oversight results in sub-optimal performance. To address these issues, we propose two key techniques: (1) MAP, a plug-and-play complex-domain propagation optimization strategy that integrates seamlessly into any MagDG to improve predictions while maintaining high running efficiency. (2) MAP++, a new digraph learning framework that further incorporates a learnable mechanism to achieve adaptive edge-wise propagation and node-wise aggregation in the complex domain for better performance. Extensive experiments on 12 datasets demonstrate that MAP offers flexibility, as it can be incorporated into any MagDG, and scalability, as it can handle web-scale digraphs. MAP++ achieves SOTA predictive performance on 4 different downstream tasks.
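N1 - Background (a sketch following the standard MagNet-style convention, not quoted from this paper, whose notation may differ): the q-parameterized magnetic Laplacian is commonly defined as $L_q = I - D_s^{-1/2} A_s D_s^{-1/2} \odot \exp(i\Theta_q)$, where $A_s = (A + A^{\top})/2$ is the symmetrized adjacency matrix, $D_s$ its degree matrix, and $\Theta_q = 2\pi q\,(A - A^{\top})$ the phase matrix encoding edge direction as a complex rotation; the q-parameter governs how strongly directional information enters the complex-domain propagation.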
KW - Digraph Neural Networks
KW - Scalability
KW - Semi-Supervised Learning
UR - http://www.scopus.com/inward/record.url?scp=105005138556&partnerID=8YFLogxK
U2 - 10.1145/3696410.3714939
DO - 10.1145/3696410.3714939
M3 - Conference contribution
AN - SCOPUS:105005138556
T3 - WWW 2025 - Proceedings of the ACM Web Conference
SP - 2908
EP - 2923
BT - WWW 2025 - Proceedings of the ACM Web Conference
PB - Association for Computing Machinery, Inc
Y2 - 28 April 2025 through 2 May 2025
ER -