TY - GEN
T1 - Optimizing Flow Completion Time via Adaptive Buffer Management in Data Center Networks
AU - Liu, Sen
AU - Lin, Xiang
AU - Guo, Zehua
AU - Wang, Yi
AU - Serhani, Mohamed Adel
AU - Xu, Yang
N1 - Publisher Copyright:
© 2021 ACM.
PY - 2021/8/9
Y1 - 2021/8/9
N2 - The traffic of modern data centers exhibits a long-tail distribution, in which massive delay-sensitive short flows and a small number of bandwidth-hungry long flows co-exist. These two types of flows may share the same bottleneck links in data center networks but have different or even opposite network requirements. Existing solutions try to realize a trade-off between the requirements of different flows by either prioritizing short flows or limiting the buffer used by long flows at switches or end-hosts. However, they do not consider dynamic traffic changes and suffer from performance degradation, resulting from severe queueing delay and massive packet drops for short flows under the current First-In-First-Out (FIFO) queueing mechanism. In this paper, we propose a novel buffer management scheme at switches, called Cut-in Queue (CQ), to achieve both low latency for short flows and high throughput for long flows. Based on real-time network status, CQ prioritizes short flows by dynamically cutting short flows' packets in ahead of long flows' packets or evicting some enqueued long flows' packets, while still enabling high throughput for long flows in most cases. Evaluations on both a DPDK testbed and in NS2 simulations show that CQ outperforms state-of-the-art buffer management schemes, reducing flow completion time by up to 73%.
AB - The traffic of modern data centers exhibits a long-tail distribution, in which massive delay-sensitive short flows and a small number of bandwidth-hungry long flows co-exist. These two types of flows may share the same bottleneck links in data center networks but have different or even opposite network requirements. Existing solutions try to realize a trade-off between the requirements of different flows by either prioritizing short flows or limiting the buffer used by long flows at switches or end-hosts. However, they do not consider dynamic traffic changes and suffer from performance degradation, resulting from severe queueing delay and massive packet drops for short flows under the current First-In-First-Out (FIFO) queueing mechanism. In this paper, we propose a novel buffer management scheme at switches, called Cut-in Queue (CQ), to achieve both low latency for short flows and high throughput for long flows. Based on real-time network status, CQ prioritizes short flows by dynamically cutting short flows' packets in ahead of long flows' packets or evicting some enqueued long flows' packets, while still enabling high throughput for long flows in most cases. Evaluations on both a DPDK testbed and in NS2 simulations show that CQ outperforms state-of-the-art buffer management schemes, reducing flow completion time by up to 73%.
KW - Data center networks
KW - buffer management
KW - scheduling
UR - http://www.scopus.com/inward/record.url?scp=85117186532&partnerID=8YFLogxK
U2 - 10.1145/3472456.3472507
DO - 10.1145/3472456.3472507
M3 - Conference contribution
AN - SCOPUS:85117186532
T3 - ACM International Conference Proceeding Series
BT - 50th International Conference on Parallel Processing, ICPP 2021 - Main Conference Proceedings
PB - Association for Computing Machinery
T2 - 50th International Conference on Parallel Processing, ICPP 2021
Y2 - 9 August 2021 through 12 August 2021
ER -