
Model Splitting Enhanced Communication-Efficient Federated Learning for CSI Feedback

  • Yanjie Dong
  • Haijun Zhang
  • Gaojie Chen
  • Xiaoyi Fan
  • Victor C. M. Leung
  • Xiping Hu*

*Corresponding author for this work

Affiliations: Shenzhen MSU-BIT University; University of Science and Technology Beijing; Sun Yat-Sen University

Research output: Contribution to journal › Article › peer-review

Abstract

Recent advancements have introduced federated machine learning-based channel state information (CSI) compression, performed before the user equipments (UEs) upload the downlink CSI to the base transceiver station (BTS). However, most existing algorithms incur high communication overhead due to frequent parameter exchanges between the UEs and the BTS. In this work, we propose a model splitting approach, with a shared model at the BTS and multiple local models at the UEs, to reduce the communication overhead. Moreover, we incorporate a pipeline module at the BTS to reduce training time. By limiting the exchanges to boundary parameters during the forward and backward passes, our algorithm significantly reduces the number of parameters exchanged during federated CSI feedback training compared with the benchmarks.
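The core idea in the abstract — a local model at the UE, a shared model at the BTS, and only boundary values crossing the air interface — can be illustrated with a minimal toy sketch. This is not the authors' implementation; all class and variable names are hypothetical, the layers are plain linear maps, and a single UE-BTS pair is trained on random data standing in for CSI samples.

```python
import numpy as np

# Hypothetical sketch of split training for one UE-BTS pair.
# The UE keeps a local layer; the BTS keeps the shared layer.
# Only the boundary activation (uplink) and the boundary gradient
# (downlink) are exchanged -- never the full model parameters.

rng = np.random.default_rng(0)

class UELocalModel:
    """UE-side local layer: compresses CSI into a boundary activation."""
    def __init__(self, dim_in, dim_b):
        self.W = rng.standard_normal((dim_in, dim_b)) * 0.1
    def forward(self, x):
        self.x = x
        return x @ self.W                   # boundary activation, sent uplink
    def backward(self, grad_b, lr=0.1):
        self.W -= lr * self.x.T @ grad_b    # local update, never uploaded

class BTSSharedModel:
    """BTS-side shared layer: reconstructs CSI from boundary activations."""
    def __init__(self, dim_b, dim_out):
        self.W = rng.standard_normal((dim_b, dim_out)) * 0.1
    def forward(self, b):
        self.b = b
        return b @ self.W
    def backward(self, grad_out, lr=0.1):
        grad_b = grad_out @ self.W.T        # boundary gradient, sent downlink
        self.W -= lr * self.b.T @ grad_out
        return grad_b

ue, bts = UELocalModel(32, 8), BTSSharedModel(8, 32)
csi = rng.standard_normal((16, 32))         # toy stand-in for a CSI batch

losses = []
for _ in range(100):
    b = ue.forward(csi)                     # uplink: boundary activation only
    recon = bts.forward(b)
    grad_out = 2.0 * (recon - csi) / csi.size
    grad_b = bts.backward(grad_out)         # downlink: boundary gradient only
    ue.backward(grad_b)
    losses.append(float(np.mean((recon - csi) ** 2)))
```

Per round, the exchanged payload is the boundary tensors (batch × 8 values each way in this toy), rather than the full weight matrices that conventional federated averaging would upload and download — which is the communication saving the abstract claims.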

Original language: English
Pages (from-to): 19766-19771
Number of pages: 6
Journal: IEEE Transactions on Vehicular Technology
Volume: 74
Issue number: 12
Publication status: Published - 2025
Externally published: Yes

Keywords

  • CSI codebook learning
  • Communication efficiency
  • federated training
  • pipeline parallelism
