TY - JOUR
T1 - Coarse-to-Fine Contrastive Learning on Graphs
AU - Zhao, Peiyao
AU - Pan, Yuangang
AU - Li, Xin
AU - Chen, Xu
AU - Tsang, Ivor W.
AU - Liao, Lejian
N1 - Publisher Copyright:
© 2012 IEEE.
PY - 2024/4/1
Y1 - 2024/4/1
N2 - Inspired by the impressive success of contrastive learning (CL), a variety of graph augmentation strategies have been employed to learn node representations in a self-supervised manner. Existing methods construct contrastive samples by adding perturbations to the graph structure or node attributes. Although impressive results are achieved, these methods are rather blind to the wealth of prior information implied by the augmentation: as the degree of perturbation applied to the original graph increases, 1) the similarity between the original graph and the generated augmented graph gradually decreases and 2) the discrimination among all nodes within each augmented view gradually increases. In this article, we argue that both kinds of prior information can be incorporated (differently) into the CL paradigm following our general ranking framework. In particular, we first interpret CL as a special case of learning to rank (L2R), which inspires us to leverage the ranking order among positive augmented views. Meanwhile, we introduce a self-ranking paradigm to ensure that the discriminative information among different nodes is maintained and remains robust to perturbations of different degrees. Experimental results on various benchmark datasets verify the effectiveness of our algorithm compared with supervised and unsupervised models.
AB - Inspired by the impressive success of contrastive learning (CL), a variety of graph augmentation strategies have been employed to learn node representations in a self-supervised manner. Existing methods construct contrastive samples by adding perturbations to the graph structure or node attributes. Although impressive results are achieved, these methods are rather blind to the wealth of prior information implied by the augmentation: as the degree of perturbation applied to the original graph increases, 1) the similarity between the original graph and the generated augmented graph gradually decreases and 2) the discrimination among all nodes within each augmented view gradually increases. In this article, we argue that both kinds of prior information can be incorporated (differently) into the CL paradigm following our general ranking framework. In particular, we first interpret CL as a special case of learning to rank (L2R), which inspires us to leverage the ranking order among positive augmented views. Meanwhile, we introduce a self-ranking paradigm to ensure that the discriminative information among different nodes is maintained and remains robust to perturbations of different degrees. Experimental results on various benchmark datasets verify the effectiveness of our algorithm compared with supervised and unsupervised models.
KW - Contrastive learning (CL)
KW - graph representation learning
KW - learning to rank (L2R)
KW - node representation
KW - self-supervised learning (SSL)
UR - http://www.scopus.com/inward/record.url?scp=85147303210&partnerID=8YFLogxK
U2 - 10.1109/TNNLS.2022.3228556
DO - 10.1109/TNNLS.2022.3228556
M3 - Article
C2 - 37018665
AN - SCOPUS:85147303210
SN - 2162-237X
VL - 35
SP - 4622
EP - 4634
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 4
ER -