TY - GEN
T1 - LRPAFL
T2 - 11th IEEE International Conference on Cyber Security and Cloud Computing, CSCloud 2024
AU - Wang, Shuo
AU - Fang, Zhengkang
AU - Gai, Keke
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Federated learning realizes distributed machine learning training by sharing the model rather than the local dataset. However, the local dataset may still be leaked during model training. While differential privacy techniques can mitigate privacy leakage to some extent, the injected noise tends to have a significant negative impact on model accuracy. To minimize the impact of noise on model accuracy while protecting the privacy of the original data, we propose a Layer-wise Relevance Propagation-based Adaptive Federated Learning scheme (LRPAFL). To ensure local data privacy, we inject adaptive noise that satisfies differential privacy into the training samples according to the correlation between local training data features and the model. Specifically, we set a correlation boundary c_t and inject an adaptive amount of noise only when the correlation between a feature and the model is greater than or equal to c_t. Furthermore, to evaluate the performance of our approach, we propose a relationship between the privacy budget and accuracy. We analyze the performance of this model both theoretically and experimentally. Compared with the baseline method, our method achieves better performance, and the proposed model reduces the impact of noise on model accuracy while protecting the data. For example, compared with the state-of-the-art scheme, the accuracy of LRPAFL is increased by 2% when ϵ = 1.
AB - Federated learning realizes distributed machine learning training by sharing the model rather than the local dataset. However, the local dataset may still be leaked during model training. While differential privacy techniques can mitigate privacy leakage to some extent, the injected noise tends to have a significant negative impact on model accuracy. To minimize the impact of noise on model accuracy while protecting the privacy of the original data, we propose a Layer-wise Relevance Propagation-based Adaptive Federated Learning scheme (LRPAFL). To ensure local data privacy, we inject adaptive noise that satisfies differential privacy into the training samples according to the correlation between local training data features and the model. Specifically, we set a correlation boundary c_t and inject an adaptive amount of noise only when the correlation between a feature and the model is greater than or equal to c_t. Furthermore, to evaluate the performance of our approach, we propose a relationship between the privacy budget and accuracy. We analyze the performance of this model both theoretically and experimentally. Compared with the baseline method, our method achieves better performance, and the proposed model reduces the impact of noise on model accuracy while protecting the data. For example, compared with the state-of-the-art scheme, the accuracy of LRPAFL is increased by 2% when ϵ = 1.
KW - Adaptive Federated Learning
KW - Differential Privacy
KW - Layer-wise Relevance Propagation
UR - https://www.scopus.com/pages/publications/85201321354
U2 - 10.1109/CSCloud62866.2024.00038
DO - 10.1109/CSCloud62866.2024.00038
M3 - Conference contribution
AN - SCOPUS:85201321354
T3 - Proceedings - 11th IEEE International Conference on Cyber Security and Cloud Computing, CSCloud 2024
SP - 174
EP - 179
BT - Proceedings - 11th IEEE International Conference on Cyber Security and Cloud Computing, CSCloud 2024
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 28 June 2024 through 30 June 2024
ER -