TY - JOUR
T1 - Optimizing Distribution and Feedback for Short LT Codes with Reinforcement Learning
AU - Qin, Zijun
AU - Fei, Zesong
AU - Huang, Jingxuan
AU - Wang, Xiaoyun
AU - Xiao, Ming
AU - Yuan, Jinhong
N1 - Publisher Copyright:
IEEE
PY - 2024
Y1 - 2024
N2 - Designing short Luby transform (LT) codes with low overhead and good error performance is crucial yet challenging for the deployment of vehicle-to-everything networks, which require high reliability, high spectral efficiency, and low latency. In this paper, we use reinforcement learning (RL) to design globally optimal transmission strategies that account for the interactions with feedback for short LT codes, a regime in which traditional asymptotic analysis based on random graph theory is known to be inaccurate. First, to reduce the decoding overhead of short LT codes, we derive the gradient expression for optimizing the degree distribution of LT codes and propose an RL-based distribution optimization (RL-DO) algorithm for designing short LT codes. Then, to improve the reliability and reduce the overhead of LT codes under limited feedback, we model the feedback optimization problem as a Markov decision process and propose the RL-based joint feedback and distribution optimization (RL-JFDO) algorithm, which aims to design globally optimal feedback schemes. Simulations show that our methods achieve lower decoding overhead, error rate, and decoding complexity than existing feedback fountain codes.
KW - Codes
KW - Decoding
KW - Electronic mail
KW - LT codes
KW - Optimization
KW - Reinforcement learning
KW - Symbols
KW - Transmitters
KW - feedback
KW - optimization
KW - reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=85201763935&partnerID=8YFLogxK
U2 - 10.1109/TCOMM.2024.3445303
DO - 10.1109/TCOMM.2024.3445303
M3 - Article
AN - SCOPUS:85201763935
SN - 1558-0857
SP - 1
JO - IEEE Transactions on Communications
JF - IEEE Transactions on Communications
ER -