TY - JOUR
T1 - RAFLS
T2 - RDP-Based Adaptive Federated Learning With Shuffle Model
AU - Wang, Shuo
AU - Gai, Keke
AU - Yu, Jing
AU - Zhu, Liehuang
AU - Wu, Hanghang
AU - Wei, Changzheng
AU - Yan, Ying
AU - Zhang, Hui
AU - Choo, Kim-Kwang Raymond
N1 - Publisher Copyright:
© 2004-2012 IEEE.
PY - 2025
Y1 - 2025
N2 - Federated Learning (FL) realizes distributed machine learning by sharing model updates rather than raw data, thereby preserving data privacy. However, an attacker may still infer a client's original local data from the shared model parameters, leading to data leakage. While Differential Privacy (DP) is designed to address such leakage in FL, the noise injected during training reduces model accuracy. To minimize the negative impact of noise on model accuracy while maintaining privacy protection, in this article we propose an adaptive FL model, entitled RDP-based Adaptive Federated Learning in the Shuffle model (RAFLS). To protect the privacy of each client's dataset, we inject adaptive noise into the client's local model by leveraging the layer-wise adaptive sensitivity of the local model. Our approach shuffles all local model parameters to mitigate the privacy budget explosion caused by high-dimensional aggregation and multiple iterations. We further propose a fine-grained model weight aggregation scheme to aggregate all local models into a global model. Our experimental evaluations demonstrate that the proposed RAFLS outperforms state-of-the-art methods in reducing the impact of noise on model accuracy while protecting data; e.g., the accuracy of RAFLS is 1.54% higher than that of the baseline scheme with ϵ = 2.0 on FashionMNIST under the IID setting.
AB - Federated Learning (FL) realizes distributed machine learning by sharing model updates rather than raw data, thereby preserving data privacy. However, an attacker may still infer a client's original local data from the shared model parameters, leading to data leakage. While Differential Privacy (DP) is designed to address such leakage in FL, the noise injected during training reduces model accuracy. To minimize the negative impact of noise on model accuracy while maintaining privacy protection, in this article we propose an adaptive FL model, entitled RDP-based Adaptive Federated Learning in the Shuffle model (RAFLS). To protect the privacy of each client's dataset, we inject adaptive noise into the client's local model by leveraging the layer-wise adaptive sensitivity of the local model. Our approach shuffles all local model parameters to mitigate the privacy budget explosion caused by high-dimensional aggregation and multiple iterations. We further propose a fine-grained model weight aggregation scheme to aggregate all local models into a global model. Our experimental evaluations demonstrate that the proposed RAFLS outperforms state-of-the-art methods in reducing the impact of noise on model accuracy while protecting data; e.g., the accuracy of RAFLS is 1.54% higher than that of the baseline scheme with ϵ = 2.0 on FashionMNIST under the IID setting.
KW - Federated learning
KW - privacy amplification
KW - Rényi differential privacy
KW - shuffle model
UR - http://www.scopus.com/inward/record.url?scp=105001063336&partnerID=8YFLogxK
U2 - 10.1109/TDSC.2024.3429503
DO - 10.1109/TDSC.2024.3429503
M3 - Article
AN - SCOPUS:105001063336
SN - 1545-5971
VL - 22
SP - 1181
EP - 1194
JO - IEEE Transactions on Dependable and Secure Computing
JF - IEEE Transactions on Dependable and Secure Computing
IS - 2
ER -