HSTI: A Light Hierarchical Spatial-Temporal Interaction Model for Map-Free Trajectory Prediction

Xiaoyang Luo, Shuaiqi Fu, Baolin Gao, Yanan Zhao*, Huachun Tan*, Zeye Song

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Trajectory prediction is a crucial task in autonomous driving, but current models' reliance on high-definition (HD) maps limits their broader applicability. To address this challenge, we propose a novel map-free trajectory prediction method built on spatial-temporal attention mechanisms. The method consists of three key stages: 1) we first encode spatial and temporal features separately using spatial and temporal attention mechanisms; 2) we then model spatial and temporal interactions with Crystal Graph Convolutional Networks (CGCN) and Multi-Head Attention (MHA); 3) finally, we introduce an adaptive anchor generation technique to handle multimodal trajectory prediction. This self-adaptive technique generates context-specific anchors, enabling accurate prediction of multiple plausible future vehicle trajectories. Extensive experiments on the Argoverse1 and V2X-Seq datasets validate the effectiveness of our approach. On Argoverse1, our method outperforms CRAT-Pred by 5.8% in minADE and 6.25% in minFDE. On V2X-Seq, it improves minADE, minFDE, and MR by 82.6%, 85.1%, and 44.0%, respectively, over the baseline model.
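
The abstract outlines a three-stage architecture. As a rough illustration only, the sketch below shows how such a pipeline could be wired up in PyTorch; every module, tensor shape, and hyperparameter (d_model, n_modes, horizon, and the use of plain multi-head attention in place of the paper's CGCN block) is an assumption made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class HSTISketch(nn.Module):
    """Illustrative skeleton of the three-stage pipeline described in the abstract."""

    def __init__(self, d_model=64, n_heads=4, n_modes=6, horizon=30):
        super().__init__()
        self.embed = nn.Linear(2, d_model)  # per-step (dx, dy) displacements -> features
        # Stage 1: separate temporal and spatial attention encoders.
        self.temporal_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.spatial_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Stage 2: interaction modeling; plain multi-head attention stands in
        # here for the CGCN + MHA blocks named in the abstract (assumption).
        self.interaction = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Stage 3: adaptive (context-specific) anchors and per-mode decoding.
        self.anchor_head = nn.Linear(d_model, n_modes * 2)
        self.traj_head = nn.Linear(d_model + 2, horizon * 2)
        self.n_modes, self.horizon = n_modes, horizon

    def forward(self, hist):                        # hist: (A, T, 2) observed agent tracks
        x = self.embed(hist)                        # (A, T, d)
        x, _ = self.temporal_attn(x, x, x)          # attention over time, per agent
        agent_feat = x[:, -1]                       # (A, d) summary of each agent's history
        s = agent_feat.unsqueeze(0)                 # (1, A, d) treat agents as a sequence
        s, _ = self.spatial_attn(s, s, s)           # attention over agents
        s, _ = self.interaction(s, s, s)            # fused spatial-temporal interaction
        feat = s.squeeze(0)                         # (A, d)
        anchors = self.anchor_head(feat).reshape(-1, self.n_modes, 2)  # (A, K, 2)
        ctx = feat.unsqueeze(1).expand(-1, self.n_modes, -1)           # (A, K, d)
        trajs = self.traj_head(torch.cat([ctx, anchors], dim=-1))      # (A, K, H*2)
        return trajs.reshape(-1, self.n_modes, self.horizon, 2), anchors


# Example: 8 agents, 20 observed steps -> 6 candidate futures of 30 steps each.
model = HSTISketch()
futures, anchors = model(torch.randn(8, 20, 2))
print(futures.shape, anchors.shape)  # torch.Size([8, 6, 30, 2]) torch.Size([8, 6, 2])
```

The sketch only conveys the data flow (temporal encoding, spatial/interaction encoding, anchor-conditioned multimodal decoding); the paper's actual layer definitions, losses, and training setup are not reproduced here.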

Original language: English
Journal: IEEE Transactions on Intelligent Transportation Systems
DOIs
Publication status: Accepted/In press - 2025

Keywords

  • Autonomous driving
  • map-free
  • spatial-temporal modeling
  • trajectory prediction
