Abstract
To address backdoor attacks in federated learning, which arise from its inherently distributed and privacy-preserving nature, we propose RDFL, which comprises four components: selecting eligible parameters for computing cosine distances; performing adaptive clustering; detecting and removing suspicious malicious local models; and applying adaptive clipping and noising. We evaluate RDFL against existing baselines on the MNIST, FEMNIST, and CIFAR-10 datasets under a non-independent and identically distributed (non-IID) setting, and we consider various attack scenarios, including different numbers of malicious attackers, distributed backdoor attacks, different poison ratios of local data, and model poisoning attacks. Experimental results show that RDFL effectively mitigates backdoor attacks and outperforms the compared baselines.
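The sketch below illustrates the general shape of such a server-side defense pipeline; it is not the authors' RDFL implementation. It assumes NumPy arrays for model parameters, replaces the paper's adaptive clustering with a simple median-distance outlier filter, and uses hypothetical names (`aggregate_with_filtering`, `noise_sigma`).

```python
import numpy as np

def aggregate_with_filtering(global_model, local_models, noise_sigma=1e-3):
    """Hypothetical server-side aggregation: cosine-distance filtering of
    client updates, followed by norm clipping and Gaussian noising.
    A simplified stand-in for the pipeline described in the abstract."""
    # Flatten each client's update (local model minus current global model).
    flat = np.stack([m.ravel() - global_model.ravel() for m in local_models])

    # Pairwise cosine distances between client updates.
    norms = np.linalg.norm(flat, axis=1, keepdims=True) + 1e-12
    unit = flat / norms
    cos_dist = 1.0 - unit @ unit.T

    # Score each client by its median distance to the others and flag
    # high-scoring clients as suspicious (simplified outlier filter in
    # place of the paper's adaptive clustering step).
    scores = np.median(cos_dist, axis=1)
    kept = flat[scores <= np.median(scores)]

    # Clipping: bound every kept update to the median update norm.
    kept_norms = np.linalg.norm(kept, axis=1)
    clip_bound = np.median(kept_norms)
    clipped = kept * np.minimum(1.0, clip_bound / (kept_norms + 1e-12))[:, None]

    # Average the clipped updates, add Gaussian noise, and apply to the global model.
    agg = clipped.mean(axis=0) + np.random.normal(0.0, noise_sigma, clipped.shape[1])
    return global_model + agg.reshape(global_model.shape)
```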
Original language | English |
---|---|
Pages (from-to) | 118-131 |
Number of pages | 14 |
Journal | Future Generation Computer Systems |
Volume | 143 |
DOI | |
Publication status | Published - June 2023 |