TY - JOUR
T1 - G-Routing
T2 - Graph Neural Networks-Based Flexible Online Routing
AU - Wei, Huihong
AU - Zhao, Yi
AU - Xu, Ke
N1 - Publisher Copyright:
© 1986-2012 IEEE.
PY - 2023/7/1
Y1 - 2023/7/1
N2 - Deep reinforcement learning (DRL) has been widely used to find optimal routing schemes to meet various demands of users. However, the optimization goal of DRL is typically static, whereas the network environment is dynamic. Changes in the traffic environment or reconfiguration of network equipment often lead to periodic changes in network performance (e.g., throughput degradation and latency peaks). A traditional static target configuration cannot reflect the varying importance of different metrics in a dynamic network environment, resulting in the inflexibility of DRL-based routing algorithms. To address this issue, we propose G-Routing, an online routing optimization algorithm that uses graph neural networks (GNNs) and DRL. By modeling and comprehending the relationships among different features (e.g., path, flow, and link) of the network, our proposed GNN model can predict the future development of network performance metrics (i.e., latency, throughput, and loss), thereby adjusting the routing algorithm's target promptly. Then, with our proposed DRL model, the agent can learn the optimal path to adapt to different environmental changes. We implement the G-Routing method on the control plane and perform simulation experiments using real-world network topology and traffic data. Experimental results demonstrate that when the network environment changes significantly, our proposed G-Routing converges faster, achieves lower jitter, and generates a more reliable routing scheme.
AB - Deep reinforcement learning (DRL) has been widely used to find optimal routing schemes to meet various demands of users. However, the optimization goal of DRL is typically static, whereas the network environment is dynamic. Changes in the traffic environment or reconfiguration of network equipment often lead to periodic changes in network performance (e.g., throughput degradation and latency peaks). A traditional static target configuration cannot reflect the varying importance of different metrics in a dynamic network environment, resulting in the inflexibility of DRL-based routing algorithms. To address this issue, we propose G-Routing, an online routing optimization algorithm that uses graph neural networks (GNNs) and DRL. By modeling and comprehending the relationships among different features (e.g., path, flow, and link) of the network, our proposed GNN model can predict the future development of network performance metrics (i.e., latency, throughput, and loss), thereby adjusting the routing algorithm's target promptly. Then, with our proposed DRL model, the agent can learn the optimal path to adapt to different environmental changes. We implement the G-Routing method on the control plane and perform simulation experiments using real-world network topology and traffic data. Experimental results demonstrate that when the network environment changes significantly, our proposed G-Routing converges faster, achieves lower jitter, and generates a more reliable routing scheme.
UR - https://www.scopus.com/pages/publications/85176115271
U2 - 10.1109/MNET.012.2300052
DO - 10.1109/MNET.012.2300052
M3 - Article
AN - SCOPUS:85176115271
SN - 0890-8044
VL - 37
SP - 90
EP - 96
JO - IEEE Network
JF - IEEE Network
IS - 4
ER -