TY - JOUR
T1 - FedMAR
T2 - A Privacy-Preserving and Robust Server-Side Multistage Federated Learning
AU - Shi, Leyu
AU - Gao, Ying
AU - Chen, Chong
AU - Huang, Siquan
AU - Zhao, Jiafeng
AU - Hu, Xiping
AU - Leung, Victor C.M.
N1 - Publisher Copyright:
© 2014 IEEE.
PY - 2025
Y1 - 2025
N2 - In recent years, federated learning (FL) has continued to evolve with the advent of big data and large language models (LLMs), but it has also exposed numerous security and privacy issues. As a form of distributed machine learning, FL systems are particularly susceptible to poisoning attacks because training data are dispersed across participants; in addition, the training outcomes of FL can be stolen at low cost by free-riders. Existing works address both types of threats, but they typically defend against only one and fail to integrate defenses against multiple threats effectively. In real-world Internet of Things (IoT) systems, however, threats are rarely limited to a single category. In this work, we aim to maintain the performance of the global model under poisoning attacks, preserve the privacy of the server against free-riders, and explore the balance between these two goals. To this end, this work proposes federated multistage asynchronous roll-back (FedMAR), which ensures the quality of local updates; it also provides privacy preservation in the global update process based on Rényi differential privacy (RDP) and offers a basis for detecting free-riders. To validate the generalization of the proposed method, we conducted experiments on both image and text datasets and further investigated its robustness against poisoning attacks, model inversion attacks, and data heterogeneity. The testing accuracy of the global model improves by up to 7.2%.
AB - In recent years, federated learning (FL) has continued to evolve with the advent of big data and large language models (LLMs), but it has also exposed numerous security and privacy issues. As a form of distributed machine learning, FL systems are particularly susceptible to poisoning attacks because training data are dispersed across participants; in addition, the training outcomes of FL can be stolen at low cost by free-riders. Existing works address both types of threats, but they typically defend against only one and fail to integrate defenses against multiple threats effectively. In real-world Internet of Things (IoT) systems, however, threats are rarely limited to a single category. In this work, we aim to maintain the performance of the global model under poisoning attacks, preserve the privacy of the server against free-riders, and explore the balance between these two goals. To this end, this work proposes federated multistage asynchronous roll-back (FedMAR), which ensures the quality of local updates; it also provides privacy preservation in the global update process based on Rényi differential privacy (RDP) and offers a basis for detecting free-riders. To validate the generalization of the proposed method, we conducted experiments on both image and text datasets and further investigated its robustness against poisoning attacks, model inversion attacks, and data heterogeneity. The testing accuracy of the global model improves by up to 7.2%.
KW - Rényi differential privacy (RDP)
KW - federated learning (FL)
KW - gradient leakage
KW - label flipping attack (LFA)
KW - secure aggregation
UR - https://www.scopus.com/pages/publications/105013782783
U2 - 10.1109/JIOT.2025.3600099
DO - 10.1109/JIOT.2025.3600099
M3 - Article
AN - SCOPUS:105013782783
SN - 2327-4662
VL - 12
SP - 47288
EP - 47306
JO - IEEE Internet of Things Journal
JF - IEEE Internet of Things Journal
IS - 22
ER -