TY - GEN
T1 - Privacy-Preserving and Robust Federated Learning Based on Secret Sharing
AU - Mei, Jiajia
AU - Shen, Xiaodong
AU - Xu, Chang
AU - Zhu, Liehuang
AU - Jin, Guoxie
AU - Sharif, Kashif
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Federated learning (FL) is a machine learning method that enables model training without centralizing data. However, FL is vulnerable to poisoning attacks, in which an attacker manipulates malicious clients to corrupt the global model by poisoning their local training data or model updates, compromising model accuracy and degrading performance. In addition, although in FL the raw data never leaves the local devices during training, attackers can still infer participants' private information from the model parameters, causing privacy leakage. To address these problems, we propose a privacy-preserving and robust federated learning aggregation scheme based on secret sharing. The scheme protects clients' data privacy while achieving Byzantine robustness. Moreover, it covers both the honest-majority and malicious-majority settings; that is, the model can effectively resist poisoning attacks whether the proportion of malicious clients is below or above 50%. Extensive experiments show that our scheme is secure against various common poisoning attacks and is more robust than several existing aggregation rules, even when malicious clients form the majority.
AB - Federated learning (FL) is a machine learning method that enables model training without centralizing data. However, FL is vulnerable to poisoning attacks, in which an attacker manipulates malicious clients to corrupt the global model by poisoning their local training data or model updates, compromising model accuracy and degrading performance. In addition, although in FL the raw data never leaves the local devices during training, attackers can still infer participants' private information from the model parameters, causing privacy leakage. To address these problems, we propose a privacy-preserving and robust federated learning aggregation scheme based on secret sharing. The scheme protects clients' data privacy while achieving Byzantine robustness. Moreover, it covers both the honest-majority and malicious-majority settings; that is, the model can effectively resist poisoning attacks whether the proportion of malicious clients is below or above 50%. Extensive experiments show that our scheme is secure against various common poisoning attacks and is more robust than several existing aggregation rules, even when malicious clients form the majority.
KW - Byzantine robustness
KW - Federated learning
KW - poisoning attack
KW - privacy protection
UR - http://www.scopus.com/inward/record.url?scp=105000199209&partnerID=8YFLogxK
U2 - 10.1109/ISPA63168.2024.00223
DO - 10.1109/ISPA63168.2024.00223
M3 - Conference contribution
AN - SCOPUS:105000199209
T3 - Proceedings - 2024 IEEE International Symposium on Parallel and Distributed Processing with Applications, ISPA 2024
SP - 1643
EP - 1650
BT - Proceedings - 2024 IEEE International Symposium on Parallel and Distributed Processing with Applications, ISPA 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 22nd IEEE International Symposium on Parallel and Distributed Processing with Applications, ISPA 2024
Y2 - 30 October 2024 through 2 November 2024
ER -