TY - GEN
T1 - Rethinking Node-wise Propagation for Large-scale Graph Learning
AU - Li, Xunkai
AU - Ma, Jingyuan
AU - Wu, Zhengyu
AU - Su, Daohan
AU - Zhang, Wentao
AU - Li, Rong-Hua
AU - Wang, Guoren
N1 - Publisher Copyright:
© 2024 ACM.
PY - 2024/5/13
Y1 - 2024/5/13
N2 - Scalable graph neural networks (GNNs) have emerged as a promising technique, exhibiting superior predictive performance and high running efficiency across numerous large-scale graph-based web applications. However, (i) most scalable GNNs tend to treat all nodes with the same propagation rules, neglecting their topological uniqueness; (ii) existing node-wise propagation optimization strategies are insufficient on web-scale graphs with intricate topology, where a full portrayal of nodes' local properties is required. Intuitively, different nodes in web-scale graphs possess distinct topological roles; propagating them indiscriminately or neglecting local contexts may therefore compromise the quality of node representations. To address these issues, we propose Adaptive Topology-aware Propagation (ATP), which reduces potentially high-bias propagation and extracts the structural patterns of each node in a scalable manner to improve running efficiency and predictive performance. Remarkably, ATP is crafted as a plug-and-play node-wise propagation optimization strategy, allowing offline execution independent of the graph learning process from a new perspective. Therefore, this approach can be seamlessly integrated into most scalable GNNs while remaining orthogonal to existing node-wise propagation optimization strategies. Extensive experiments on 12 datasets demonstrate the effectiveness of ATP.
AB - Scalable graph neural networks (GNNs) have emerged as a promising technique, exhibiting superior predictive performance and high running efficiency across numerous large-scale graph-based web applications. However, (i) most scalable GNNs tend to treat all nodes with the same propagation rules, neglecting their topological uniqueness; (ii) existing node-wise propagation optimization strategies are insufficient on web-scale graphs with intricate topology, where a full portrayal of nodes' local properties is required. Intuitively, different nodes in web-scale graphs possess distinct topological roles; propagating them indiscriminately or neglecting local contexts may therefore compromise the quality of node representations. To address these issues, we propose Adaptive Topology-aware Propagation (ATP), which reduces potentially high-bias propagation and extracts the structural patterns of each node in a scalable manner to improve running efficiency and predictive performance. Remarkably, ATP is crafted as a plug-and-play node-wise propagation optimization strategy, allowing offline execution independent of the graph learning process from a new perspective. Therefore, this approach can be seamlessly integrated into most scalable GNNs while remaining orthogonal to existing node-wise propagation optimization strategies. Extensive experiments on 12 datasets demonstrate the effectiveness of ATP.
KW - graph neural networks
KW - scalability
KW - semi-supervised learning
UR - http://www.scopus.com/inward/record.url?scp=85194081146&partnerID=8YFLogxK
U2 - 10.1145/3589334.3645450
DO - 10.1145/3589334.3645450
M3 - Conference contribution
AN - SCOPUS:85194081146
T3 - WWW 2024 - Proceedings of the ACM Web Conference
SP - 560
EP - 569
BT - WWW 2024 - Proceedings of the ACM Web Conference
PB - Association for Computing Machinery, Inc
T2 - 33rd ACM Web Conference, WWW 2024
Y2 - 13 May 2024 through 17 May 2024
ER -