Meta Graph Transformer: A Novel Framework for Spatial–Temporal Traffic Prediction

Xue Ye*, Shen Fang, Fang Sun, Chunxia Zhang, Shiming Xiang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

60 Citations (Scopus)

Abstract

Accurate traffic prediction is critical for enhancing the performance of intelligent transportation systems. The key challenge in this task is how to properly model the complex dynamics of traffic while respecting and exploiting both spatial and temporal heterogeneity in data. This paper proposes a novel framework called Meta Graph Transformer (MGT) to address this problem. The MGT framework is a generalization of the original transformer, which is used to model vector sequences in natural language processing. Specifically, MGT has an encoder-decoder architecture. The encoder is responsible for encoding historical traffic data into intermediate representations, while the decoder predicts future traffic states autoregressively. The main building blocks of MGT are three types of attention layers, named Temporal Self-Attention (TSA), Spatial Self-Attention (SSA), and Temporal Encoder-Decoder Attention (TEDA), all of which have a multi-head structure. TSAs and SSAs are employed by both the encoder and the decoder to capture temporal and spatial correlations. TEDAs are employed by the decoder, allowing every position in the decoder to attend to all positions in the input sequence temporally. By leveraging multiple graphs, SSA can conduct sparse spatial attention with various inductive biases. To make the model aware of temporal and spatial conditions, Spatial–Temporal Embeddings (STEs) are learned from external attributes, which are composed of temporal attributes (e.g., sequential order, time of day) and spatial attributes (e.g., Laplacian eigenmaps). These embeddings are then utilized by all the attention layers via meta-learning, endowing these layers with Spatial–Temporal Heterogeneity-Aware (STHA) properties. Experiments on three real-world traffic datasets demonstrate the superiority of our model over several state-of-the-art methods. Our code and data are available at http://github.com/lonicera-yx/MGT.
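To make the meta-learning idea concrete, below is a minimal sketch (not the authors' implementation) of a graph-masked spatial self-attention layer whose projection weights are generated from spatial–temporal embeddings, in the spirit of the SSA/STHA description above. It assumes PyTorch, a single attention head, and hypothetical names (MetaSpatialSelfAttention, weight_gen); the released code at http://github.com/lonicera-yx/MGT is the authoritative reference.

```python
import torch
import torch.nn as nn

class MetaSpatialSelfAttention(nn.Module):
    """Illustrative sketch: single-head spatial self-attention whose Q/K/V
    projections are generated per node from a spatial-temporal embedding (STE),
    with attention restricted to the edges of a given graph (sparse inductive bias)."""

    def __init__(self, d_model: int, d_ste: int):
        super().__init__()
        self.d_model = d_model
        # Meta-learner: maps each node's STE to that node's own projection weights.
        self.weight_gen = nn.Linear(d_ste, 3 * d_model * d_model)

    def forward(self, x, ste, adj):
        # x:   (N, d_model)  node features at one time step
        # ste: (N, d_ste)    spatial-temporal embeddings
        # adj: (N, N)        binary adjacency mask (1 = attend, 0 = block)
        N, d = x.shape
        w = self.weight_gen(ste).view(N, 3, d, d)
        q = torch.einsum('nd,ndk->nk', x, w[:, 0])  # per-node query projection
        k = torch.einsum('nd,ndk->nk', x, w[:, 1])  # per-node key projection
        v = torch.einsum('nd,ndk->nk', x, w[:, 2])  # per-node value projection
        scores = q @ k.t() / d ** 0.5
        scores = scores.masked_fill(adj == 0, float('-inf'))  # sparse attention over graph edges
        attn = torch.softmax(scores, dim=-1)
        return attn @ v

if __name__ == '__main__':
    N, d_model, d_ste = 5, 16, 8
    layer = MetaSpatialSelfAttention(d_model, d_ste)
    x = torch.randn(N, d_model)
    ste = torch.randn(N, d_ste)
    adj = (torch.rand(N, N) > 0.5).float()
    adj.fill_diagonal_(1.0)  # every node can attend to itself
    print(layer(x, ste, adj).shape)  # torch.Size([5, 16])
```

In the full MGT framework, multiple such graph-masked attentions (one per graph) and multiple heads would be combined, and analogous meta-learned parameterization applies to the temporal attention layers; the sketch only shows the core weight-generation and masking mechanism.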

Original language: English
Pages (from-to): 544-563
Number of pages: 20
Journal: Neurocomputing
Volume: 491
DOI
Publication status: Published - 28 Jun 2022
