Abstract
Federated learning (FL) is a distributed machine learning paradigm that enables scattered clients to collaboratively train a shared global model. FL is suitable for privacy-preserving applications because it keeps the training data decentralized. However, FL is susceptible to backdoor attacks, which attempt to embed backdoor triggers into the global model during training and later activate them to cause a desired misclassification. In this paper, to effectively defend against backdoor attacks in the FL system, we propose SCFL, which consists of three parts: first, the Singular Value Decomposition (SVD) technique is used to extract the significant features of model updates; second, the k-means clustering algorithm is used to cluster these significant features; finally, cosine similarity is used to measure the distance between model updates, and the optimal clients are selected, after clipping, to aggregate the global model. Unlike most robust algorithms, SCFL does not require the number of attackers to be smaller than the number of benign clients, nor does it restrict the data distribution among clients to be independent and identically distributed (IID). Moreover, SCFL does not require any auxiliary information outside of the learning process. We conduct extensive experiments covering various types of backdoor attacks. Experimental results demonstrate that SCFL can effectively defend against these backdoor attacks and outperform existing state-of-the-art algorithms.
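The server-side selection pipeline described above (SVD features, k-means clustering, cosine-similarity-based client selection, clipping, aggregation) can be illustrated with the following minimal sketch. This is not the authors' implementation; the function name `scfl_aggregate` and the parameters `n_clusters`, `top_k`, and `clip_norm` are illustrative assumptions, and model updates are assumed to be flattened NumPy vectors.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

def scfl_aggregate(updates, n_clusters=2, top_k=10, clip_norm=1.0):
    """Sketch of an SVD + k-means + cosine-similarity aggregation step.

    updates: list of 1-D numpy arrays, one flattened model update per client.
    Parameter names and defaults are assumptions for illustration only.
    """
    U = np.stack(updates)                       # (num_clients, num_params)

    # 1) SVD: project each update onto the leading right-singular vectors
    #    to obtain a low-dimensional "significant feature" per client.
    _, _, Vt = np.linalg.svd(U, full_matrices=False)
    feats = U @ Vt[:n_clusters].T               # (num_clients, n_clusters)

    # 2) k-means: group clients by their significant features.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)

    # 3) Cosine similarity: within the largest cluster, rank clients by
    #    average similarity to the others and keep the top_k most similar.
    main = np.argmax(np.bincount(labels))
    idx = np.where(labels == main)[0]
    sims = cosine_similarity(U[idx]).mean(axis=1)
    chosen = idx[np.argsort(sims)[::-1][:top_k]]

    # Clip each selected update to a bounded L2 norm, then average.
    clipped = [u * min(1.0, clip_norm / (np.linalg.norm(u) + 1e-12))
               for u in U[chosen]]
    return np.mean(clipped, axis=0)             # aggregated global update
```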
| Original language | English |
|---|---|
| Article number | 103414 |
| Journal | Computers and Security |
| Volume | 133 |
| DOIs | |
| Publication status | Published - Oct 2023 |
Keywords
- Backdoor attack
- Clustering
- Federated learning
- Robust
- SVD