TY - JOUR
T1 - TDFL
T2 - Truth Discovery Based Byzantine Robust Federated Learning
AU - Xu, Chang
AU - Jia, Yu
AU - Zhu, Liehuang
AU - Zhang, Chuan
AU - Jin, Guoxie
AU - Sharif, Kashif
N1 - Publisher Copyright:
© 1990-2012 IEEE.
PY - 2022/12/1
Y1 - 2022/12/1
N2 - Federated learning (FL) enables data owners to train a joint global model without sharing private data. However, it is vulnerable to Byzantine attackers that can launch poisoning attacks to destroy model training. Existing defense strategies rely on additional datasets to train trustable server models or on trusted execution environments to mitigate attacks. Besides, these strategies can only tolerate a small number of malicious users or resist a few types of poisoning attacks. To address these challenges, we design a novel federated learning method, TDFL, Truth Discovery based Federated Learning, which can defend against multiple poisoning attacks without additional datasets even when the Byzantine users are ≥ 50%. Specifically, TDFL considers different scenarios with different malicious proportions. For the honest-majority setting (Byzantine < 50%), we design a special robust truth discovery aggregation scheme to remove malicious model updates, which can assign weights according to users' contributions; for the Byzantine-majority setting (Byzantine ≥ 50%), we use a maximum clique-based filter to guarantee global model quality. To the best of our knowledge, this is the first study that uses truth discovery to defend against poisoning attacks. It is also the first scheme that achieves strong robustness under multiple kinds of attacks launched by a high proportion of attackers without root datasets. Extensive comparative experiments are designed with five state-of-the-art aggregation rules under five types of classical poisoning attacks on different datasets. The experimental results demonstrate that TDFL is practical and achieves reasonable Byzantine robustness.
AB - Federated learning (FL) enables data owners to train a joint global model without sharing private data. However, it is vulnerable to Byzantine attackers that can launch poisoning attacks to destroy model training. Existing defense strategies rely on additional datasets to train trustable server models or on trusted execution environments to mitigate attacks. Besides, these strategies can only tolerate a small number of malicious users or resist a few types of poisoning attacks. To address these challenges, we design a novel federated learning method, TDFL, Truth Discovery based Federated Learning, which can defend against multiple poisoning attacks without additional datasets even when the Byzantine users are ≥ 50%. Specifically, TDFL considers different scenarios with different malicious proportions. For the honest-majority setting (Byzantine < 50%), we design a special robust truth discovery aggregation scheme to remove malicious model updates, which can assign weights according to users' contributions; for the Byzantine-majority setting (Byzantine ≥ 50%), we use a maximum clique-based filter to guarantee global model quality. To the best of our knowledge, this is the first study that uses truth discovery to defend against poisoning attacks. It is also the first scheme that achieves strong robustness under multiple kinds of attacks launched by a high proportion of attackers without root datasets. Extensive comparative experiments are designed with five state-of-the-art aggregation rules under five types of classical poisoning attacks on different datasets. The experimental results demonstrate that TDFL is practical and achieves reasonable Byzantine robustness.
KW - Federated learning
KW - poisoning attack
KW - truth discovery
UR - http://www.scopus.com/inward/record.url?scp=85139415861&partnerID=8YFLogxK
U2 - 10.1109/TPDS.2022.3205714
DO - 10.1109/TPDS.2022.3205714
M3 - Article
AN - SCOPUS:85139415861
SN - 1045-9219
VL - 33
SP - 4835
EP - 4848
JO - IEEE Transactions on Parallel and Distributed Systems
JF - IEEE Transactions on Parallel and Distributed Systems
IS - 12
ER -