Optimizing Distribution and Feedback for Short LT Codes with Reinforcement Learning

Zijun Qin, Zesong Fei, Jingxuan Huang, Xiaoyun Wang, Ming Xiao, Jinhong Yuan

Research output: Contribution to journal › Article › peer review

Abstract

Designing short Luby transform (LT) codes with low overhead and good error performance is crucial yet challenging for the deployment of vehicle-to-everything networks, which require high reliability, high spectral efficiency, and low latency. In this paper, we use reinforcement learning (RL) to design globally optimal transmission strategies for short LT codes that account for the interactions introduced by feedback, a regime in which traditional asymptotic analysis based on random graph theory is known to be inaccurate. First, to reduce the decoding overhead of short LT codes, we derive the gradient expression for optimizing the degree distribution of LT codes and propose an RL-based distribution optimization (RL-DO) algorithm for designing short LT codes. Then, to improve the reliability and reduce the overhead of LT codes under limited feedback, we model the feedback optimization problem as a Markov decision process and propose the RL-based joint feedback and distribution optimization (RL-JFDO) algorithm, which designs globally optimal feedback schemes. Simulations show that our methods achieve lower decoding overhead, error rate, and decoding complexity than existing feedback fountain codes.
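To illustrate the first contribution, gradient-based optimization of the degree distribution, the following is a minimal Python sketch, not the authors' RL-DO algorithm. It assumes the degree distribution is parameterized by softmax logits, the reward is the negative decoding overhead measured by Monte Carlo simulation of a peeling decoder over an erasure-free channel, and the gradient is estimated with REINFORCE using a mean baseline; all names and parameters (K, MAX_DEG, learning rate, batch size) are illustrative assumptions.

```python
# Hypothetical sketch: policy-gradient optimization of an LT degree
# distribution. Not the paper's RL-DO algorithm; parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
K = 32          # number of source symbols (a short LT code)
MAX_DEG = 16    # largest encoding degree considered

def run_episode(degree_probs, max_symbols=4 * K):
    """Transmit encoded symbols until peeling decoding succeeds.

    Returns the decoding overhead n/K and the list of degrees drawn,
    which the REINFORCE estimator below needs for its score function.
    """
    decoded, buffered, degrees = set(), [], []
    for n in range(1, max_symbols + 1):
        d = int(rng.choice(np.arange(1, MAX_DEG + 1), p=degree_probs))
        degrees.append(d)
        buffered.append(set(rng.choice(K, size=d, replace=False)))
        progress = True
        while progress:                      # peel all degree-1 symbols
            progress = False
            for neighbors in buffered:
                neighbors -= decoded         # drop already-decoded edges
                if len(neighbors) == 1:
                    decoded.add(neighbors.pop())
                    progress = True
            buffered = [s for s in buffered if s]
        if len(decoded) == K:
            return n / K, degrees
    return max_symbols / K, degrees          # give up past the budget

# REINFORCE over softmax logits: minimize the expected overhead.
logits = np.zeros(MAX_DEG)
lr, batch = 0.02, 32
for step in range(100):
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    samples = [run_episode(probs) for _ in range(batch)]
    baseline = np.mean([oh for oh, _ in samples])    # variance reduction
    grad = np.zeros(MAX_DEG)
    for oh, degrees in samples:
        # Score function of a softmax policy:
        # d/d(logits) log p(d) = onehot(d) - probs, summed over all draws.
        score = -len(degrees) * probs
        for d in degrees:
            score[d - 1] += 1.0
        grad += (oh - baseline) * score
    logits -= lr * grad / batch              # gradient descent on overhead

probs = np.exp(logits - logits.max())
probs /= probs.sum()
print("learned degree distribution:", np.round(probs, 3))
```

In a similar spirit, the paper's second contribution (RL-JFDO) would wrap such a simulation in a Markov decision process whose state reflects the receiver's feedback reports; that extension is omitted here.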

Original language: English
Pages (from-to): 1
Number of pages: 1
Journal: IEEE Transactions on Communications
DOI
Publication status: Accepted/In press - 2024
