TDFL: Truth Discovery Based Byzantine Robust Federated Learning

Chang Xu*, Yu Jia, Liehuang Zhu, Chuan Zhang, Guoxie Jin, Kashif Sharif

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

15 Citations (Scopus)

Abstract

Federated learning (FL) enables data owners to train a joint global model without sharing private data. However, it is vulnerable to Byzantine attackers who can launch poisoning attacks to disrupt model training. Existing defense strategies rely on additional datasets to train trustworthy server models, or on trusted execution environments, to mitigate attacks. Moreover, these strategies can only tolerate a small number of malicious users or resist a few types of poisoning attacks. To address these challenges, we design TDFL, Truth Discovery based Federated Learning, a novel federated learning method that can defend against multiple poisoning attacks without additional datasets, even when the proportion of Byzantine users is ≥ 50%. Specifically, TDFL considers scenarios with different malicious proportions. For the honest-majority setting (Byzantine < 50%), we design a robust truth discovery aggregation scheme that removes malicious model updates and assigns weights according to users' contributions; for the Byzantine-majority setting (Byzantine ≥ 50%), we use a maximum clique-based filter to guarantee global model quality. To the best of our knowledge, this is the first study that uses truth discovery to defend against poisoning attacks, and the first scheme that achieves strong robustness under multiple kinds of attacks launched by a high proportion of attackers without root datasets. Extensive comparative experiments are conducted against five state-of-the-art aggregation rules under five types of classical poisoning attacks on different datasets. The experimental results demonstrate that TDFL is practical and achieves reasonable Byzantine robustness.
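To make the two settings described in the abstract concrete, the sketch below illustrates (a) a truth-discovery-style weighted aggregation in which clients whose updates agree with an iteratively estimated "truth" receive larger weights, and (b) a maximum-clique filter that keeps only the largest group of mutually similar updates. This is a hypothetical illustration under simplifying assumptions (flattened updates, a CRH-style weight rule, a cosine-similarity threshold); the function names are introduced here and it is not the aggregation rule evaluated in the paper.

# Illustrative sketch only, NOT the paper's TDFL algorithm. Function names,
# the CRH-style weight update, and the similarity threshold are assumptions.
import numpy as np
import networkx as nx

def truth_discovery_aggregate(client_updates, num_iters=10, eps=1e-8):
    """Weight clients by their agreement with an iteratively estimated truth.

    client_updates: list of 1-D NumPy arrays (flattened model updates).
    Returns the aggregated update and the per-client weights.
    """
    updates = np.stack(client_updates)            # (n_clients, dim)
    n = updates.shape[0]
    weights = np.full(n, 1.0 / n)                 # start from uniform weights

    for _ in range(num_iters):
        truth = weights @ updates                 # weighted average = current truth estimate
        dists = np.linalg.norm(updates - truth, axis=1) ** 2 + eps
        weights = np.log(dists.sum() / dists)     # closer to the truth -> larger weight
        weights /= weights.sum() + eps

    return weights @ updates, weights

def max_clique_filter(client_updates, threshold=0.5):
    """Keep only the largest group of mutually similar updates.

    Builds a graph whose nodes are clients and whose edges connect pairs of
    updates with cosine similarity above `threshold`, then returns the updates
    belonging to a maximum clique of that graph.
    """
    updates = np.stack(client_updates)
    unit = updates / (np.linalg.norm(updates, axis=1, keepdims=True) + 1e-8)
    cos = unit @ unit.T                           # pairwise cosine similarities
    g = nx.Graph()
    g.add_nodes_from(range(len(updates)))
    for i in range(len(updates)):
        for j in range(i + 1, len(updates)):
            if cos[i, j] >= threshold:
                g.add_edge(i, j)
    clique = max(nx.find_cliques(g), key=len)     # largest maximal clique
    return [client_updates[i] for i in clique]

In an honest-majority round the server could apply truth_discovery_aggregate directly; under a suspected Byzantine majority it could first run max_clique_filter and then aggregate only the surviving updates.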

Original language: English
Pages (from-to): 4835-4848
Number of pages: 14
Journal: IEEE Transactions on Parallel and Distributed Systems
Volume: 33
Issue number: 12
Publication status: Published - 1 Dec 2022

Keywords

  • Federated learning
  • poisoning attack
  • truth discovery
