ScaleDRL: A Scalable Deep Reinforcement Learning Approach for Traffic Engineering in SDN with Pinning Control

Penghao Sun, Zehua Guo*, Julong Lan, Junfei Li, Yuxiang Hu, Thar Baker

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

40 Citations (Scopus)

Abstract

As modern communication networks become more complex and dynamic, designing a good Traffic Engineering (TE) policy becomes difficult due to the complexity of solving the optimal traffic scheduling problem. Traditional methods usually build a fixed model of the network traffic and solve an objective function to obtain a TE policy, which cannot guarantee solution efficiency. The emerging Deep Reinforcement Learning (DRL) and Software-Defined Networking (SDN) technologies give us a chance to design a model-free TE scheme through Machine Learning (ML). However, existing DRL-based TE solutions all face a scalability problem, i.e., they cannot be applied to large networks. In this paper, we propose to combine control theory and DRL to achieve an efficient network control scheme for TE. The proposed scheme, ScaleDRL, employs the idea of pinning control to select a subset of links in the network, named critical links. Based on the traffic distribution information collected by the SDN controller, a DRL algorithm dynamically adjusts a set of link weights for the critical links. Through a weighted shortest path algorithm, the forwarding paths of the network flows are then dynamically adjusted according to these link weights. Packet-level simulations show that ScaleDRL reduces the average end-to-end transmission delay by up to 39% compared to the state-of-the-art DRL-based TE scheme across different network topologies.
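As a rough illustration of the mechanism described in the abstract, the following Python sketch (using networkx) shows how adjusting the weights of only a few critical links steers weighted-shortest-path routing. The topology, the choice of critical links, and the weight values standing in for the DRL agent's output are all hypothetical; this is not the authors' implementation.

```python
# Minimal sketch, assuming a toy topology: only a subset of links ("critical links")
# receives dynamically adjusted weights, and forwarding paths are recomputed with a
# weighted shortest-path algorithm.
import networkx as nx

# Build a small example topology with unit link weights.
G = nx.Graph()
edges = [("A", "B"), ("B", "D"), ("A", "C"), ("C", "D"), ("B", "C")]
G.add_edges_from(edges, weight=1.0)

# Pinning-control idea: only these links are controlled (hypothetical choice).
critical_links = [("B", "D"), ("C", "D")]

def apply_drl_weights(graph, links, weights):
    """Write the weights produced by the (hypothetical) DRL agent onto the critical links."""
    for (u, v), w in zip(links, weights):
        graph[u][v]["weight"] = w

def forwarding_path(graph, src, dst):
    """Recompute the forwarding path with a weighted shortest-path algorithm (Dijkstra)."""
    return nx.shortest_path(graph, src, dst, weight="weight")

print(forwarding_path(G, "A", "D"))                # before adjustment, e.g. ['A', 'B', 'D']
apply_drl_weights(G, critical_links, [5.0, 0.5])   # agent raises B-D cost, lowers C-D cost
print(forwarding_path(G, "A", "D"))                # traffic is steered onto ['A', 'C', 'D']
```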

Original language: English
Article number: 107891
Journal: Computer Networks
Volume: 190
DOI
Publication status: Published - 8 May 2021
