Reinforcement Learning Meets Wireless Networks: A Layering Perspective

Yawen Chen, Yu Liu, Ming Zeng, Umber Saleem, Zhaoming Lu, Xiangming Wen, Depeng Jin, Zhu Han, Tao Jiang, Yong Li*

*Corresponding author for this work

Research output: Contribution to journal › Review article › peer-review

20 Citations (Scopus)

Abstract

Driven by soaring traffic demand and the growing diversity of mobile services, wireless networks are evolving to become increasingly dense and heterogeneous. Accordingly, in such large-scale and complicated wireless networks, optimal control is reaching unprecedented levels of complexity, while traditional solutions based on handcrafted offline algorithms become inefficient due to high complexity, low robustness, and high overhead. Therefore, reinforcement learning (RL), which enables network entities to learn from their actions and consequences in the interactive network environment, has attracted significant attention. In this article, we comprehensively review the applications of RL in wireless networks from a layering perspective. First, we present an overview of the principles, fundamentals, and several advanced models of RL. Then, we review the up-to-date applications of RL in various functionality blocks of different network layers, ranging from the low-level physical layer to the high-level application layer. Finally, we outline a broad spectrum of challenges, open issues, and future research directions for RL-empowered wireless networks.
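To make the "learn from actions and consequences" idea concrete, the following is a minimal sketch of tabular Q-learning, the kind of basic RL model the survey builds on. The toy environment (a hypothetical channel-selection problem with made-up reward dynamics) and all parameter values are assumptions for illustration only and are not taken from the article.

```python
# Minimal tabular Q-learning sketch (illustrative only).
# The environment below is a hypothetical toy channel-selection task,
# not an example from the surveyed paper.
import random

N_STATES, N_ACTIONS = 4, 2           # assumed toy state/action spaces
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    """Hypothetical environment: reward 1 when the chosen 'channel'
    (action) matches the parity of the current state, else 0."""
    reward = 1.0 if action == state % 2 else 0.0
    next_state = random.randrange(N_STATES)
    return next_state, reward

state = 0
for episode in range(10_000):
    # epsilon-greedy action selection: explore with prob. EPSILON, else exploit
    if random.random() < EPSILON:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])

    next_state, reward = step(state, action)

    # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
    td_target = reward + GAMMA * max(Q[next_state])
    Q[state][action] += ALPHA * (td_target - Q[state][action])
    state = next_state

print("Learned Q-table:", Q)
```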

Original language: English
Article number: 9201129
Pages (from-to): 85-111
Number of pages: 27
Journal: IEEE Internet of Things Journal
Volume: 8
Issue number: 1
DOIs
Publication status: Published - 1 Jan 2021

Keywords

  • Communications
  • optimal controlling
  • protocol layers
  • reinforcement learning (RL)
  • wireless networks

