TY - JOUR
T1 - RobustPFL
T2 - Robust Personalized Federated Learning
AU - Chen, Guorong
AU - Wang, Wei
AU - Wu, Yufang
AU - Li, Chao
AU - Xu, Guangquan
AU - Ji, Shouling
AU - Li, Tao
AU - Shen, Meng
AU - Han, Yufei
N1 - Publisher Copyright:
© 2004-2012 IEEE.
PY - 2025
Y1 - 2025
N2 - Conventional federated learning (FL) coordinated by a central server focuses on training a global model and protecting the privacy of clients' training data by storing it locally. However, statistical heterogeneity hinders the global model from adapting to the non-IID distributions among clients. Moreover, untrusted and unreliable central servers and malicious clients may compromise model integrity and availability, thus degrading the robustness of FL. To address these challenges, we present RobustPFL, a decentralized personalized federated learning (PFL) approach that combines α-based Layer-position Normalized Similarity (α-LNS) with local collaborative training to improve personalized performance, while utilizing a blockchain-based committee mechanism to coordinate the aggregation process, thereby achieving both high personalized accuracy and robustness. Extensive experiments show that RobustPFL outperforms multiple algorithms, including Local training, FedAvg, FedReptile, Per-FedAvg, FedBN, and SPFL, on the MNIST, CIFAR10, EMNIST, and N-BaIoT datasets under four non-IID settings. We also evaluate RobustPFL's effectiveness against two types of attacks: poisoning attacks and free-riding attacks. In particular, for three prevalent poisoning attacks (backdoor, label flipping, and model poisoning), we compare non-defensive (FedAvg) and defensive (Krum, trimmed mean, Bulyan, FedBN, FLAME, and FangTrmean) methods with the proposed RobustPFL. The results show that our approach achieves significant defensive effects.
AB - Conventional federated learning (FL) coordinated by a central server focuses on training a global model and protecting the privacy of clients' training data by storing it locally. However, statistical heterogeneity hinders the global model from adapting to the non-IID distributions among clients. Moreover, untrusted and unreliable central servers and malicious clients may compromise model integrity and availability, thus degrading the robustness of FL. To address these challenges, we present RobustPFL, a decentralized personalized federated learning (PFL) approach that combines α-based Layer-position Normalized Similarity (α-LNS) with local collaborative training to improve personalized performance, while utilizing a blockchain-based committee mechanism to coordinate the aggregation process, thereby achieving both high personalized accuracy and robustness. Extensive experiments show that RobustPFL outperforms multiple algorithms, including Local training, FedAvg, FedReptile, Per-FedAvg, FedBN, and SPFL, on the MNIST, CIFAR10, EMNIST, and N-BaIoT datasets under four non-IID settings. We also evaluate RobustPFL's effectiveness against two types of attacks: poisoning attacks and free-riding attacks. In particular, for three prevalent poisoning attacks (backdoor, label flipping, and model poisoning), we compare non-defensive (FedAvg) and defensive (Krum, trimmed mean, Bulyan, FedBN, FLAME, and FangTrmean) methods with the proposed RobustPFL. The results show that our approach achieves significant defensive effects.
KW - blockchain
KW - committee
KW - cosine similarity
KW - free-riding attack
KW - local collaborative training
KW - Personalized federated learning
KW - poisoning attack
UR - http://www.scopus.com/inward/record.url?scp=85214457878&partnerID=8YFLogxK
U2 - 10.1109/TDSC.2025.3526840
DO - 10.1109/TDSC.2025.3526840
M3 - Article
AN - SCOPUS:85214457878
SN - 1545-5971
JO - IEEE Transactions on Dependable and Secure Computing
JF - IEEE Transactions on Dependable and Secure Computing
ER -