TY - JOUR
T1 - Model layered optimization with contrastive learning for personalized federated learning
AU - Xu, Dawei
AU - Lu, Chentao
AU - Chen, Tian Xin
AU - Zheng, Baokun
AU - Zhang, Chuan
AU - Zhu, Liehuang
AU - Zhao, Jian
N1 - Publisher Copyright:
© 2025 Chongqing University of Posts and Telecommunications.
PY - 2025/12
Y1 - 2025/12
N2 - In federated learning (FL), the distribution of data across different clients degrades the performance of the global model during training. Personalized federated learning (pFL) can address this problem through global model personalization. Research over the past few years has either calibrated weight differences across the entire model or optimized only individual layers, without considering that different layers of the neural network have different utilities, resulting in lagging model convergence and inadequate personalization on non-IID data. In this paper, we propose model layered optimization for feature extractor and classifier (pFedEC), a novel pFL training framework that personalizes different layers of the model. Our study divides the model layers into the feature extractor and the classifier. We initialize the model's classifiers during training, while making each local model's feature extractor learn the representation of the global model's feature extractor to correct each client's local training, thereby integrating the utilities of the different layers of the entire model. Our extensive experiments show that pFedEC achieves 92.95% accuracy on CIFAR-10, outperforming existing pFL methods by approximately 1.8%. On CIFAR-100 and Tiny-ImageNet, pFedEC improves accuracy by at least 4.2%, reaching 73.02% and 28.39%, respectively.
AB - In federated learning (FL), the distribution of data across different clients degrades the performance of the global model during training. Personalized federated learning (pFL) can address this problem through global model personalization. Research over the past few years has either calibrated weight differences across the entire model or optimized only individual layers, without considering that different layers of the neural network have different utilities, resulting in lagging model convergence and inadequate personalization on non-IID data. In this paper, we propose model layered optimization for feature extractor and classifier (pFedEC), a novel pFL training framework that personalizes different layers of the model. Our study divides the model layers into the feature extractor and the classifier. We initialize the model's classifiers during training, while making each local model's feature extractor learn the representation of the global model's feature extractor to correct each client's local training, thereby integrating the utilities of the different layers of the entire model. Our extensive experiments show that pFedEC achieves 92.95% accuracy on CIFAR-10, outperforming existing pFL methods by approximately 1.8%. On CIFAR-100 and Tiny-ImageNet, pFedEC improves accuracy by at least 4.2%, reaching 73.02% and 28.39%, respectively.
KW - Contrastive learning
KW - Federated learning (FL)
KW - Personalized federated learning (pFL)
KW - Theoretical analysis
UR - https://www.scopus.com/pages/publications/105025111522
U2 - 10.1016/j.dcan.2025.08.011
DO - 10.1016/j.dcan.2025.08.011
M3 - Article
AN - SCOPUS:105025111522
SN - 2468-5925
VL - 11
SP - 1973
EP - 1982
JO - Digital Communications and Networks
JF - Digital Communications and Networks
IS - 6
ER -