Abstract
In a real federated learning (FL) system, the communication overhead of passing model parameters between the clients and the parameter server (PS) is often a bottleneck. Hierarchical federated learning (HFL), which places multiple edge servers (ESs) between the clients and the PS, can partially alleviate this communication pressure but still requires the PS to aggregate model parameters from multiple ESs. To further reduce communication overhead, we remove the central PS, so that each training iteration is completed by transmitting the global model only between two adjacent ESs. We call this serial learning method Sequential FL (SFL). For the first time, we introduce SFL into HFL and propose a novel algorithm adapted to this combined framework, called Fed-CHS. Convergence results are derived for strongly convex and non-convex loss functions under various data heterogeneity setups, and they show convergence performance comparable with that of algorithms designed for HFL or SFL alone. Experimental results demonstrate that the proposed Fed-CHS outperforms baseline methods in both communication overhead saving and test accuracy.
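To make the PS-free training pattern described above concrete, the following is a minimal Python sketch of one sequential round over hierarchical edge servers, assuming simple least-squares clients and synchronous intra-ES averaging. It is an illustration under our own assumptions, not the authors' Fed-CHS implementation; all names (`Client`, `edge_servers`, `local_sgd`) are hypothetical.

```python
# Minimal sketch (not the paper's code) of one PS-free, sequential round:
# edge servers (ESs) are visited in order, each ES averages updates from
# its own clients, and the resulting model is handed to the adjacent ES.
import numpy as np

rng = np.random.default_rng(0)
DIM, LR, LOCAL_STEPS = 10, 0.1, 5


class Client:
    """Holds a private least-squares dataset and runs local SGD."""

    def __init__(self, n_samples=32):
        self.X = rng.normal(size=(n_samples, DIM))
        self.y = self.X @ rng.normal(size=DIM) + 0.1 * rng.normal(size=n_samples)

    def local_sgd(self, w):
        w = w.copy()
        for _ in range(LOCAL_STEPS):
            grad = self.X.T @ (self.X @ w - self.y) / len(self.y)
            w -= LR * grad
        return w


# Each ES owns a disjoint set of clients (data heterogeneity would enter
# through how these client datasets are generated).
edge_servers = [[Client() for _ in range(4)] for _ in range(3)]

w = np.zeros(DIM)  # global model; there is no central parameter server
for _ in range(20):
    # Sequential pass: the model travels from one ES to the adjacent ES.
    for clients in edge_servers:
        # Intra-ES step: clients train locally, the ES averages the results.
        w = np.mean([c.local_sgd(w) for c in clients], axis=0)
    # After the last ES, the model returns to the first ES for the next
    # round; no aggregation at a central PS is needed.
```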
Original language | English |
---|---|
Journal | IEEE Transactions on Mobile Computing |
DOIs | |
Publication status | Accepted/In press - 2025 |
Externally published | Yes |
Keywords
- Data heterogeneity
- federated learning
- hierarchical architecture
- sequential federated learning