TY - JOUR
T1 - Mitigating Routing Update Overhead for Traffic Engineering by Combining Destination-Based Routing With Reinforcement Learning
AU - Ye, Minghao
AU - Hu, Yang
AU - Zhang, Junjie
AU - Guo, Zehua
AU - Chao, H. Jonathan
N1 - Publisher Copyright:
© 1983-2012 IEEE.
PY - 2022/9/1
Y1 - 2022/9/1
N2 - Traffic Engineering (TE) is a widely adopted network operation that optimizes network performance and resource utilization. Destination-based routing is supported by legacy routers and is more readily deployed than flow-based routing; its forwarding entries can be frequently updated by TE to accommodate traffic dynamics. However, as the network size grows, destination-based TE can incur high time complexity when generating and updating many forwarding entries, which may limit the responsiveness of TE and degrade network performance. In this paper, we propose a novel destination-based TE solution called FlexEntry, which leverages emerging Reinforcement Learning (RL) to reduce time complexity and routing update overhead while simultaneously achieving good network performance. For each traffic matrix, FlexEntry updates only a few forwarding entries, called critical entries, to redistribute a small portion of the total traffic and improve network performance. These critical entries are intelligently selected by RL, with traffic split ratios optimized by Linear Programming (LP). We find that the combination of RL and LP is highly effective. Simulation results on six real-world network topologies show that FlexEntry reduces entry updates by up to 99.3% on average and generalizes well to unseen traffic matrices with near-optimal load-balancing performance.
KW - Reinforcement learning
KW - destination-based routing
KW - linear programming
KW - routing update overhead
KW - traffic engineering
UR - http://www.scopus.com/inward/record.url?scp=85135243492&partnerID=8YFLogxK
DO - 10.1109/JSAC.2022.3191337
M3 - Article
AN - SCOPUS:85135243492
SN - 0733-8716
VL - 40
SP - 2662
EP - 2677
JO - IEEE Journal on Selected Areas in Communications
JF - IEEE Journal on Selected Areas in Communications
IS - 9
ER -