TY - GEN
T1 - Boosting Graph Convolution with Disparity-induced Structural Refinement
AU - Huang, Sujia
AU - Pi, Yueyang
AU - Zhang, Tong
AU - Liu, Wenzhe
AU - Cui, Zhen
N1 - Publisher Copyright:
© 2025 Copyright held by the owner/author(s).
PY - 2025/4/28
Y1 - 2025/4/28
N2 - Graph Neural Networks (GNNs) have demonstrated remarkable capability in processing graph-structured data. Recent studies have found that most GNNs rely on the homophily assumption of graphs, leading to unsatisfactory performance on heterophilous graphs. While certain methods have been developed to address heterophilous links, they lack a precise estimation of high-order relationships between nodes. This can result in the aggregation of excessive interfering information during message propagation, thus degrading the representational ability of learned features. In this work, we propose a Disparity-induced Structural Refinement (DSR) method that enables adaptive and selective message propagation in GNNs to enhance representation learning on heterophilous graphs. We theoretically analyze the necessity of structural refinement during message passing, grounded in the derivation of an error bound for node classification. To this end, we design a disparity score that combines both feature and structural information at the node level, reflecting the connectivity degree of multi-hop neighbor nodes. Based on the disparity score, we can adjust the aggregation of neighbor nodes, thereby mitigating the impact of irrelevant information during message passing. Experimental results demonstrate that our method achieves competitive performance, mostly outperforming advanced methods on both homophilous and heterophilous datasets.
AB - Graph Neural Networks (GNNs) have demonstrated remarkable capability in processing graph-structured data. Recent studies have found that most GNNs rely on the homophily assumption of graphs, leading to unsatisfactory performance on heterophilous graphs. While certain methods have been developed to address heterophilous links, they lack a precise estimation of high-order relationships between nodes. This can result in the aggregation of excessive interfering information during message propagation, thus degrading the representational ability of learned features. In this work, we propose a Disparity-induced Structural Refinement (DSR) method that enables adaptive and selective message propagation in GNNs to enhance representation learning on heterophilous graphs. We theoretically analyze the necessity of structural refinement during message passing, grounded in the derivation of an error bound for node classification. To this end, we design a disparity score that combines both feature and structural information at the node level, reflecting the connectivity degree of multi-hop neighbor nodes. Based on the disparity score, we can adjust the aggregation of neighbor nodes, thereby mitigating the impact of irrelevant information during message passing. Experimental results demonstrate that our method achieves competitive performance, mostly outperforming advanced methods on both homophilous and heterophilous datasets.
KW - Graph Neural Network
KW - Homophily and Heterophily
KW - Message Passing
KW - Structural Learning
UR - http://www.scopus.com/inward/record.url?scp=105005140484&partnerID=8YFLogxK
U2 - 10.1145/3696410.3714786
DO - 10.1145/3696410.3714786
M3 - Conference contribution
AN - SCOPUS:105005140484
T3 - WWW 2025 - Proceedings of the ACM Web Conference
SP - 2209
EP - 2221
BT - WWW 2025 - Proceedings of the ACM Web Conference
PB - Association for Computing Machinery, Inc
T2 - 34th ACM Web Conference, WWW 2025
Y2 - 28 April 2025 through 2 May 2025
ER -