TY - GEN
T1 - Gradient Recalibration for Improved Visibility of Tail Classes in Supervised Contrastive Learning
AU - Zhan, Genze
AU - Li, Xin
AU - Heng, Yong
AU - Zhang, Yan
AU - Wang, Jiaojiao
AU - Zhao, Peiyao
AU - Mu, Meitao
AU - Zhu, Xueying
AU - Wang, Mingzhong
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Contrastive learning and supervised contrastive learning (SCL) have proven their effectiveness on graphs. However, they suffer from representation collapse when facing class imbalance. To address this, we first propose a quantitative model, analogous to the Thomson problem when all classes are of equal size, which maps classes onto the hypersphere where different classes repel each other. Based on this model, we theoretically show that, when applied to imbalanced node classification, tail classes are pushed together due to the dominating repellent forces from head classes. We therefore recalibrate the gradient of the SCL loss to enforce a uniform distribution of all classes in the feature space, improving the visibility of tail classes. Extensive experiments on graph datasets indicate that the proposed method significantly enhances the uniformity of class representations, thus achieving better performance for imbalanced node classification.
AB - Contrastive learning and supervised contrastive learning (SCL) have proven their effectiveness on graphs. However, they suffer from representation collapse when facing class imbalance. To address this, we first propose a quantitative model, analogous to the Thomson problem when all classes are of equal size, which maps classes onto the hypersphere where different classes repel each other. Based on this model, we theoretically show that, when applied to imbalanced node classification, tail classes are pushed together due to the dominating repellent forces from head classes. We therefore recalibrate the gradient of the SCL loss to enforce a uniform distribution of all classes in the feature space, improving the visibility of tail classes. Extensive experiments on graph datasets indicate that the proposed method significantly enhances the uniformity of class representations, thus achieving better performance for imbalanced node classification.
KW - graph neural network
KW - representation collapse
KW - supervised contrastive learning
KW - Thomson problem
UR - http://www.scopus.com/inward/record.url?scp=85201238740&partnerID=8YFLogxK
U2 - 10.1109/CAI59869.2024.00126
DO - 10.1109/CAI59869.2024.00126
M3 - Conference contribution
AN - SCOPUS:85201238740
T3 - Proceedings - 2024 IEEE Conference on Artificial Intelligence, CAI 2024
SP - 644
EP - 649
BT - Proceedings - 2024 IEEE Conference on Artificial Intelligence, CAI 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2nd IEEE Conference on Artificial Intelligence, CAI 2024
Y2 - 25 June 2024 through 27 June 2024
ER -