Optimizing Distribution and Feedback for Short LT Codes with Reinforcement Learning

Zijun Qin, Zesong Fei, Jingxuan Huang, Xiaoyun Wang, Ming Xiao, Jinhong Yuan

Research output: Contribution to journal › Article › peer-review

Abstract

Designing short Luby transform (LT) codes with low overhead and good error performance is crucial yet challenging for the deployment of vehicle-to-everything networks, which require high reliability, high spectral efficiency, and low latency. In this paper, we investigate the design of globally optimal transmission strategies that account for the interactions introduced by feedback in short LT codes, using reinforcement learning (RL); in this short-length regime, traditional asymptotic analysis based on random graph theory is known to be inaccurate. First, to reduce the decoding overhead of short LT codes, we derive the gradient expression for optimizing the degree distribution of LT codes and propose an RL-based distribution optimization (RL-DO) algorithm for designing short LT codes. Then, to improve the reliability and reduce the overhead of LT codes under limited feedback, we model the feedback optimization problem as a Markov decision process and propose the RL-based joint feedback and distribution optimization (RL-JFDO) algorithm, which aims to design globally optimal feedback schemes. Simulations show that our methods achieve lower decoding overhead, error rate, and decoding complexity than existing feedback fountain codes.
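For readers unfamiliar with the degree distributions that the RL-DO algorithm is said to optimize, the following minimal Python sketch illustrates conventional LT encoding under a standard robust soliton distribution. All function names and parameter values (k, c, delta) are illustrative assumptions following textbook LT-code conventions; they are not taken from this article, whose optimized distributions and RL procedures are described in the full text.

```python
# Minimal, illustrative LT encoding sketch (not from the paper): it shows the
# role of the degree distribution that RL-based methods would tune. The robust
# soliton distribution and the parameters k, c, delta follow standard LT-code
# conventions and are assumptions, not values from this article.
import numpy as np

def robust_soliton(k, c=0.1, delta=0.5):
    """Return the robust soliton degree distribution over degrees 1..k."""
    s = c * np.log(k / delta) * np.sqrt(k)
    rho = np.array([1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)])
    tau = np.zeros(k)
    pivot = int(round(k / s))
    for d in range(1, pivot):
        tau[d - 1] = s / (k * d)
    tau[pivot - 1] = s * np.log(s / delta) / k
    mu = rho + tau
    return mu / mu.sum()

def lt_encode(source_symbols, dist, rng):
    """Generate one coded symbol: draw a degree, XOR that many source symbols."""
    k = len(source_symbols)
    degree = rng.choice(np.arange(1, k + 1), p=dist)
    neighbors = rng.choice(k, size=degree, replace=False)
    coded = 0
    for idx in neighbors:
        coded ^= source_symbols[idx]
    return coded, neighbors

rng = np.random.default_rng(0)
k = 64
dist = robust_soliton(k)
source = rng.integers(0, 256, size=k)      # byte-valued source symbols
symbol, nbrs = lt_encode(source, dist, rng)
print(f"coded symbol {symbol} built from {len(nbrs)} source symbols")
```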

Original language: English
Pages (from-to): 1
Number of pages: 1
Journal: IEEE Transactions on Communications
DOIs
Publication status: Accepted/In press - 2024

Keywords

  • Codes
  • Decoding
  • Electronic mail
  • LT codes
  • Optimization
  • Reinforcement learning
  • Symbols
  • Transmitters
  • feedback
