TY - JOUR
T1 - GCL-GroW
T2 - Graph contrastive learning via group whitening
AU - Zhang, Chunhui
AU - Miao, Rui
AU - Ding, Lizhong
AU - Li, Pengqi
AU - Guo, Yuhan
AU - Li, Xingcan
AU - Yuan, Ye
AU - Wang, Guoren
N1 - Publisher Copyright:
© 2025 Elsevier Ltd
PY - 2026/4
Y1 - 2026/4
N2 - Graph Neural Networks (GNNs) effectively learn from graph structures, but their performance is constrained by scarce labeled data. Graph contrastive learning (GCL) techniques address this limitation by maximizing the mutual information between two views of the input graph, effectively reducing reliance on labeled data. Nevertheless, existing GCL methods face two main drawbacks: the use of negative samples increases the training burden, while relying solely on positive samples often necessitates intricate architectures. To overcome these issues, we propose a novel approach called Graph Contrastive Learning via Group Whitening (GCL-GroW), which is the first to apply feature group whitening and a consistency loss to address two fundamental goals in GCL: uniformity and alignment. To ensure uniformity, we apply Zero-Phase Component Analysis (ZCA) group whitening to the positive samples, aiming to reduce feature correlations and avoid dimension collapse, in which all sample representations converge to a single point. To achieve alignment, we use a consistency loss among positive samples, which encourages the model to generate similar representations for these samples, thereby reducing their distance in the feature space. Notably, GCL-GroW achieves this without relying on asymmetric networks, projection layers, stop-gradient operations, or complex loss functions. Extensive experiments demonstrate that GCL-GroW not only achieves competitive accuracy on node and graph classification tasks across multiple datasets, but also reduces training time and memory, validating its superiority over existing state-of-the-art methods. The code is available at: https://github.com/zhangchunhui2024/GCL-GroW.
AB - Graph Neural Networks (GNNs) effectively learn from graph structures, but their performance is constrained by scarce labeled data. Graph contrastive learning (GCL) techniques address this limitation by maximizing the mutual information between two views of the input graph, effectively reducing reliance on labeled data. Nevertheless, existing GCL methods face two main drawbacks: the use of negative samples increases the training burden, while relying solely on positive samples often necessitates intricate architectures. To overcome these issues, we propose a novel approach called Graph Contrastive Learning via Group Whitening (GCL-GroW), which is the first to apply feature group whitening and a consistency loss to address two fundamental goals in GCL: uniformity and alignment. To ensure uniformity, we apply Zero-Phase Component Analysis (ZCA) group whitening to the positive samples, aiming to reduce feature correlations and avoid dimension collapse, in which all sample representations converge to a single point. To achieve alignment, we use a consistency loss among positive samples, which encourages the model to generate similar representations for these samples, thereby reducing their distance in the feature space. Notably, GCL-GroW achieves this without relying on asymmetric networks, projection layers, stop-gradient operations, or complex loss functions. Extensive experiments demonstrate that GCL-GroW not only achieves competitive accuracy on node and graph classification tasks across multiple datasets, but also reduces training time and memory, validating its superiority over existing state-of-the-art methods. The code is available at: https://github.com/zhangchunhui2024/GCL-GroW.
KW - Contrastive learning
KW - Graph neural networks
KW - Graph representation learning
KW - Group whitening
UR - https://www.scopus.com/pages/publications/105023092724
U2 - 10.1016/j.patcog.2025.112757
DO - 10.1016/j.patcog.2025.112757
M3 - Article
AN - SCOPUS:105023092724
SN - 0031-3203
VL - 172
JO - Pattern Recognition
JF - Pattern Recognition
M1 - 112757
ER -