Abstract
Federated learning is a distributed machine learning approach that enables multiple participants to collaboratively train a model without sharing their data, thus preserving privacy. However, the decentralized nature of federated learning also makes it susceptible to backdoor attacks, where malicious participants can embed hidden vulnerabilities within the model. Addressing these threats efficiently and effectively is crucial, especially given the impracticality of iterative and resource-intensive detection methods in federated learning environments. This article presents a novel framework for one-shot backdoor removal in federated learning. Our approach integrates advanced anomaly detection techniques with a unique model update aggregation strategy, allowing for the identification and neutralization of backdoor influences in a single update cycle without the need for extensive data access or communication between participants. Extensive experiments across various federated architectures and data distributions demonstrate that our method effectively mitigates backdoor threats while maintaining model performance and scalability. This work not only enhances the security of federated models but also contributes to the broader applicability of federated learning in sensitive and critical domains.
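The abstract does not spell out the one-shot mechanism, so the following is only a hypothetical sketch of the general idea it describes: the server scores each client's model update for anomaly (here, by a simple distance-to-median rule that stands in for the paper's "advanced anomaly detection techniques"), drops flagged updates, and averages the rest in a single aggregation round. The function name, the distance-to-median heuristic, and the `factor` threshold are all illustrative assumptions, not the authors' actual method.

```python
import numpy as np

def one_shot_filtered_aggregate(client_updates, factor=3.0):
    """Single-round robust aggregation sketch (NOT the paper's algorithm).

    Flags a client update as anomalous when its Euclidean distance to the
    coordinate-wise median update exceeds `factor` times the median of all
    such distances, then FedAvg-averages only the unflagged updates.
    """
    updates = np.stack(client_updates)             # (n_clients, n_params)
    median_update = np.median(updates, axis=0)     # robust reference point
    dists = np.linalg.norm(updates - median_update, axis=1)
    keep = dists < factor * np.median(dists)       # hypothetical threshold rule
    return updates[keep].mean(axis=0), keep

# Toy usage: nine benign clients near zero, one large malicious update.
rng = np.random.default_rng(0)
benign = [rng.normal(0.0, 0.1, size=10) for _ in range(9)]
malicious = [np.full(10, 5.0)]
agg, kept = one_shot_filtered_aggregate(benign + malicious)
```

Because the reference point is a median rather than a mean, a single large malicious update barely shifts it, so the outlier is excluded and the aggregate stays close to the benign consensus after one pass, with no iterative retraining.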
Original language | English |
---|---|
Pages (from-to) | 37718-37730 |
Number of pages | 13 |
Journal | IEEE Internet of Things Journal |
Volume | 11 |
Issue number | 23 |
DOIs | |
Publication status | Published - 2024 |
Keywords
- Backdoor attack
- Federated learning
- Machine learning