TY - JOUR
T1 - BGEFL
T2 - Enabling Communication-Efficient Federated Learning via Bandit Gradient Estimation in Resource-Constrained Networks
AU - Li, Youqi
AU - Li, Fan
AU - Yang, Song
AU - Wang, Yu
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Federated learning (FL) achieves state-of-the-art performance in privacy-preserving distributed machine learning and thereby promotes the Artificial Intelligence of Things (AIoT). However, FL suffers from expensive communication costs, since a large number of model parameters and model updates (e.g., gradients) must be exchanged between the aggregator and participants over multiple rounds. This is especially challenging in resource-constrained networks, where devices have limited computation and communication capabilities. Existing works mainly focus on improving communication efficiency through local training and model/gradient compression; however, improving the communication efficiency of FL from the perspective of gradient estimation remains unexplored. In this paper, we bridge this gap by conducting a systematic study of gradient estimation for communication-efficient FL. We propose a bandit-based gradient estimation-aware FL (BGEFL) framework that directly estimates participants’ gradients from limited bandit feedback (i.e., their local function values). We prove that BGEFL enjoys O(1) communication complexity: each client uploads only a single point’s feedback in the uplink, yielding constant-size uplink communication per round. Moreover, our bandit-based gradient estimator is communication-efficient, unbiased, and stable. We establish convergence guarantees for BGEFL when training strongly convex, general convex, and non-convex models. Finally, we evaluate BGEFL on several datasets, and the experimental results demonstrate its effectiveness.
AB - Federated learning (FL) achieves state-of-the-art performance in privacy-preserving distributed machine learning and thereby promotes the Artificial Intelligence of Things (AIoT). However, FL suffers from expensive communication costs, since a large number of model parameters and model updates (e.g., gradients) must be exchanged between the aggregator and participants over multiple rounds. This is especially challenging in resource-constrained networks, where devices have limited computation and communication capabilities. Existing works mainly focus on improving communication efficiency through local training and model/gradient compression; however, improving the communication efficiency of FL from the perspective of gradient estimation remains unexplored. In this paper, we bridge this gap by conducting a systematic study of gradient estimation for communication-efficient FL. We propose a bandit-based gradient estimation-aware FL (BGEFL) framework that directly estimates participants’ gradients from limited bandit feedback (i.e., their local function values). We prove that BGEFL enjoys O(1) communication complexity: each client uploads only a single point’s feedback in the uplink, yielding constant-size uplink communication per round. Moreover, our bandit-based gradient estimator is communication-efficient, unbiased, and stable. We establish convergence guarantees for BGEFL when training strongly convex, general convex, and non-convex models. Finally, we evaluate BGEFL on several datasets, and the experimental results demonstrate its effectiveness.
KW - Federated learning
KW - communication complexity
KW - communication efficiency
KW - gradient estimation
UR - https://www.scopus.com/pages/publications/105019521599
U2 - 10.1109/TON.2025.3562869
DO - 10.1109/TON.2025.3562869
M3 - Article
AN - SCOPUS:105019521599
SN - 1063-6692
VL - 33
SP - 2410
EP - 2425
JO - IEEE Transactions on Networking
JF - IEEE Transactions on Networking
IS - 5
ER -