Modeling On-road Trajectories with Multi-task Learning

Kaijun Liu, Sijie Ruan*, Cheng Long, Liang Yu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

With the increasing popularity of GPS modules, many urban applications, such as car navigation, rely on trajectory data modeling. In this work, we study the problem of modeling on-road trajectories, which is to predict the next road segment given a partial GPS trajectory. Existing methods that model trajectories with Markov chains or recurrent neural networks suffer from several issues, including limited capability for sequential modeling, insufficient incorporation of the road network context, and failure to capture the underlying semantics of trajectories. In this article, we propose a new trajectory modeling framework called Multi-task Modeling for Trajectories (MMTraj+), which avoids these issues. Specifically, MMTraj+ uses multi-head self-attention networks for sequential modeling, captures the overall road network as context information for road segment embedding, and performs an auxiliary task of predicting the trajectory destination information (namely the ID and bearing angle) to better guide the main trajectory modeling task (controlled by a carefully designed gating mechanism). In addition, we tailor MMTraj+ for cases where the destination information is known by dropping the auxiliary task of predicting the trajectory destination information. Extensive experiments conducted on real-world datasets demonstrate the superiority of the proposed method over baseline methods.
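To make the described architecture concrete, the following is a minimal PyTorch-style sketch of the ingredients named in the abstract: a multi-head self-attention encoder over road-segment embeddings, an auxiliary head predicting the destination (segment ID and bearing angle), and a gate that fuses the auxiliary signal into the main next-segment prediction. All module names, dimensions, and the exact form of the gate are illustrative assumptions, not the authors' actual implementation.

# Illustrative sketch only; assumes PyTorch. Road-network context and positional
# encodings are omitted for brevity, and the segment embedding is a plain lookup table.
import torch
import torch.nn as nn

class MMTrajSketch(nn.Module):
    def __init__(self, num_segments: int, d_model: int = 128, n_heads: int = 8, n_layers: int = 2):
        super().__init__()
        # Road-segment embedding (in the paper, informed by road-network context).
        self.seg_emb = nn.Embedding(num_segments, d_model)
        encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)
        # Multi-head self-attention stack for sequential modeling of the partial trajectory.
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Auxiliary heads: destination segment ID and bearing angle (sin/cos to avoid wrap-around).
        self.dest_head = nn.Linear(d_model, num_segments)
        self.bearing_head = nn.Linear(d_model, 2)
        # Gate controlling how much the destination-aware signal influences the main task.
        self.dest_proj = nn.Linear(d_model, d_model)
        self.gate = nn.Sequential(nn.Linear(2 * d_model, d_model), nn.Sigmoid())
        self.next_head = nn.Linear(d_model, num_segments)

    def forward(self, seg_ids: torch.Tensor):
        # seg_ids: (batch, seq_len) road-segment IDs of the partial trajectory.
        h = self.encoder(self.seg_emb(seg_ids))            # (batch, seq_len, d_model)
        last = h[:, -1, :]                                 # representation of the latest position
        dest_logits = self.dest_head(last)                 # auxiliary: destination segment ID
        bearing = self.bearing_head(last)                  # auxiliary: (sin, cos) of bearing angle
        dest_repr = self.dest_proj(last)                   # destination-aware representation
        g = self.gate(torch.cat([last, dest_repr], dim=-1))
        fused = g * dest_repr + (1 - g) * last             # gated fusion of main and auxiliary signals
        next_logits = self.next_head(fused)                # main task: next road segment
        return next_logits, dest_logits, bearing

# Toy usage: 4 partial trajectories of length 12 over a network with 1,000 segments.
model = MMTrajSketch(num_segments=1000)
next_logits, dest_logits, bearing = model(torch.randint(0, 1000, (4, 12)))

In training, one would typically combine a cross-entropy loss on the next-segment and destination-ID predictions with a regression loss on the bearing angle; the weighting of these losses is another design choice not specified here.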

Original language: English
Article number: ART24
Journal: ACM Transactions on Knowledge Discovery from Data
Volume: 19
Issue number: 1
DOIs
Publication status: Published - 6 Jan 2025

Keywords

  • Bearing angle
  • Multi-task learning
  • Road network
  • Trajectory modeling
  • Transformer
