TY - JOUR
T1 - Graph Attention Network–Based Deep Reinforcement Learning Scheduling Framework for In-Vehicle Time-Sensitive Networking
AU - Sun, Wenjing
AU - Zou, Yuan
AU - Guan, Nan
AU - Zhang, Xudong
AU - Du, Guodong
AU - Wen, Ya
N1 - Publisher Copyright:
IEEE
PY - 2024
Y1 - 2024
N2 - Time-sensitive networking (TSN) offers deterministic low-latency communication, making it a critical solution for the in-vehicle networks of high-level autonomous vehicles. The deterministic transmission of TSN relies on TSN traffic scheduling. To ensure real-time transmission performance and vehicle functional safety, in-vehicle TSN scheduling aims to reduce end-to-end delay. Despite the promising potential of graph neural networks and deep reinforcement learning (DRL) in navigating complex TSN scheduling environments, their application has predominantly been limited to enhancing schedulability without a targeted focus on minimizing delay. This article introduces a DRL-based in-vehicle TSN scheduling framework built on the graph attention network (GAT). The scheduling problem is abstracted as a delay optimization problem and mapped to a Markov decision process (MDP), which is solved using the proximal policy optimization (PPO) algorithm. The GAT's attention mechanism is incorporated to extract critical information, enhancing feature extraction and improving scheduling accuracy. Through training, this GAT-based PPO method achieves high-precision offline scheduling and produces low-delay scheduling results. Simulation results demonstrate that the proposed method improves offline scheduling performance compared with other DRL-based scheduling methods. Leveraging the trained neural network, the proposed method also delivers high robustness in online scheduling under link-failure scenarios: it can produce a scheduling solution in just 3.8 s, and the scheduling results for all failure scenarios surpass those of rule-based benchmark methods.
AB - Time-sensitive networking (TSN) offers deterministic low-latency communication, making it a critical solution for the in-vehicle networks of high-level autonomous vehicles. The deterministic transmission of TSN relies on TSN traffic scheduling. To ensure real-time transmission performance and vehicle functional safety, in-vehicle TSN scheduling aims to reduce end-to-end delay. Despite the promising potential of graph neural networks and deep reinforcement learning (DRL) in navigating complex TSN scheduling environments, their application has predominantly been limited to enhancing schedulability without a targeted focus on minimizing delay. This article introduces a DRL-based in-vehicle TSN scheduling framework built on the graph attention network (GAT). The scheduling problem is abstracted as a delay optimization problem and mapped to a Markov decision process (MDP), which is solved using the proximal policy optimization (PPO) algorithm. The GAT's attention mechanism is incorporated to extract critical information, enhancing feature extraction and improving scheduling accuracy. Through training, this GAT-based PPO method achieves high-precision offline scheduling and produces low-delay scheduling results. Simulation results demonstrate that the proposed method improves offline scheduling performance compared with other DRL-based scheduling methods. Leveraging the trained neural network, the proposed method also delivers high robustness in online scheduling under link-failure scenarios: it can produce a scheduling solution in just 3.8 s, and the scheduling results for all failure scenarios surpass those of rule-based benchmark methods.
KW - Deep reinforcement learning (DRL)
KW - in-vehicle networks (IVN)
KW - link failure
KW - time-sensitive networking (TSN)
KW - traffic scheduling
UR - http://www.scopus.com/inward/record.url?scp=85191889226&partnerID=8YFLogxK
U2 - 10.1109/TII.2024.3388669
DO - 10.1109/TII.2024.3388669
M3 - Article
AN - SCOPUS:85191889226
SN - 1551-3203
SP - 1
EP - 12
JO - IEEE Transactions on Industrial Informatics
JF - IEEE Transactions on Industrial Informatics
ER -