Federated Learning Resilient to Byzantine Attacks and Data Heterogeneity

Shiyuan Zuo, Xingrun Yan, Rongfei Fan*, Han Hu, Hangguan Shan, Tony Q.S. Quek, Puning Zhao

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

This paper addresses federated learning (FL) in the context of malicious Byzantine attacks and data heterogeneity. We introduce a novel Robust Average Gradient Algorithm (RAGA), which uses the geometric median for aggregation and allows a flexible number of local update rounds. Unlike most existing resilient approaches, which base their convergence analysis on strongly-convex loss functions or homogeneously distributed datasets, this work conducts convergence analysis for both strongly-convex and non-convex loss functions over heterogeneous datasets. The theoretical analysis indicates that as long as the fraction of the data from malicious users is less than half, RAGA can achieve convergence at a rate of O(1/T^(2/3−δ)) for non-convex loss functions, where T is the iteration number and δ ∈ (0, 2/3). For strongly-convex loss functions, the convergence rate is linear. Furthermore, the stationary point or global optimal solution is shown to be attainable as data heterogeneity diminishes. Experimental results validate the robustness of RAGA against Byzantine attacks and demonstrate its superior convergence performance compared to baselines under varying intensities of Byzantine attacks on heterogeneous datasets.
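The abstract's key ingredient is geometric-median aggregation of client updates, which tolerates outliers that would corrupt a plain average. The sketch below is not the paper's RAGA algorithm itself, only a minimal illustration of why the geometric median resists Byzantine gradients; it approximates the median with Weiszfeld's fixed-point iteration, a standard method, and all names (`geometric_median`, `grads`) are illustrative.

```python
import numpy as np

def geometric_median(points, max_iter=100, tol=1e-7):
    """Approximate the geometric median of a set of vectors
    via Weiszfeld's iteration (a standard fixed-point method)."""
    points = np.asarray(points, dtype=float)
    median = points.mean(axis=0)  # start from the coordinate-wise mean
    for _ in range(max_iter):
        dists = np.linalg.norm(points - median, axis=1)
        dists = np.maximum(dists, 1e-12)  # guard against division by zero
        weights = 1.0 / dists
        new_median = (weights[:, None] * points).sum(axis=0) / weights.sum()
        if np.linalg.norm(new_median - median) < tol:
            return new_median
        median = new_median
    return median

# Toy round: three honest workers send gradients near [1, 1];
# one Byzantine worker sends a large outlier.
grads = [[1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [100.0, -100.0]]
robust = geometric_median(grads)  # stays near the honest cluster [1, 1]
naive = np.mean(grads, axis=0)    # dragged far away by the outlier
```

Because a strict majority of the vectors cluster near [1, 1], the geometric median lands near that cluster, while the naive mean is pulled arbitrarily far by a single attacker; this is the intuition behind the "fraction of malicious data less than half" condition in the abstract.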

Original language: English
Journal: IEEE Transactions on Mobile Computing
DOIs
Publication status: Accepted/In press - 2025
Externally published: Yes

Keywords

  • Byzantine attack
  • data heterogeneity
  • Federated learning
  • robust aggregation
