Abstract
Recent advances apply federated machine learning to compress channel state information (CSI) before the user equipments (UEs) upload the downlink CSI to the base transceiver station (BTS). However, most existing algorithms impose a high communication overhead due to frequent parameter exchanges between the UEs and the BTS. In this work, we propose a model splitting approach, with a shared model at the BTS and multiple local models at the UEs, to reduce this overhead. Moreover, we embed a pipeline module at the BTS to reduce training time. By limiting exchanges to boundary parameters during the forward and backward passes, our algorithm significantly reduces the number of exchanged parameters compared with the benchmarks during federated CSI feedback training.
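The boundary-exchange idea in the abstract can be illustrated with a minimal sketch. The code below is not the paper's implementation; it assumes a simple linear CSI autoencoder, with a hypothetical UE-side encoder (`W_ue`, the local model) and a BTS-side decoder (`W_bts`, the shared model), and shows that each training step exchanges only the boundary activation on the uplink and the boundary gradient on the downlink, rather than full model parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only
csi_dim, code_dim = 32, 8
W_ue = rng.normal(0, 0.1, (code_dim, csi_dim))   # local encoder at the UE
W_bts = rng.normal(0, 0.1, (csi_dim, code_dim))  # shared decoder at the BTS
lr = 0.05

def train_step(h):
    """One split-training step on a CSI vector h (reconstruction loss)."""
    global W_ue, W_bts
    # Forward: the UE uploads only the boundary activation z (code_dim values),
    # not its encoder weights.
    z = W_ue @ h
    h_hat = W_bts @ z                 # BTS reconstructs the CSI
    loss = np.mean((h_hat - h) ** 2)
    # Backward: the BTS updates its shared decoder locally and sends back
    # only the boundary gradient dL/dz (code_dim values).
    g_out = 2 * (h_hat - h) / h.size
    grad_W_bts = np.outer(g_out, z)
    g_z = W_bts.T @ g_out
    W_bts -= lr * grad_W_bts
    # The UE finishes the backward pass with the received boundary gradient.
    W_ue -= lr * np.outer(g_z, h)
    return loss

h = rng.normal(size=csi_dim)
losses = [train_step(h) for _ in range(200)]
```

In this sketch each round moves `2 * code_dim` scalars per sample over the air, whereas exchanging full models would move every entry of `W_ue` and `W_bts`; the pipeline module described in the abstract (overlapping the BTS-side computation of multiple UEs) is omitted here for brevity.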
| Original language | English |
|---|---|
| Pages (from-to) | 19766-19771 |
| Number of pages | 6 |
| Journal | IEEE Transactions on Vehicular Technology |
| Volume | 74 |
| Issue number | 12 |
| DOIs | |
| Publication status | Published - 2025 |
| Externally published | Yes |
Keywords
- CSI codebook learning
- communication efficiency
- federated training
- pipeline parallelism
Title
Model Splitting Enhanced Communication-Efficient Federated Learning for CSI Feedback