Fusion of Gaze and Scene Information for Driving Behaviour Recognition: A Graph-Neural-Network-Based Framework

Yangtian Yi, Chao Lu*, Boyang Wang*, Long Cheng, Zirui Li, Jianwei Gong

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Accurate recognition of driver behaviours is the basis for a reliable driver assistance system. This paper proposes a novel fusion framework for driver behaviour recognition that utilises traffic scene and driver gaze information. The proposed framework is based on the graph neural network (GNN) and contains three modules: the gaze analysing (GA) module, the scene understanding (SU) module and the information fusion (IF) module. The GA module obtains gaze images of drivers and extracts gaze features from them. The SU module provides trajectory predictions for surrounding vehicles, motorcycles, bicycles and other traffic participants. The GA and SU modules run in parallel, and the outputs of both are sent to the IF module, which fuses the gaze and scene information using the attention mechanism and recognises driving behaviours through a combined classifier. The proposed framework is verified on a naturalistic driving dataset. Comparative experiments with state-of-the-art methods demonstrate that the proposed framework has superior performance for driving behaviour recognition in various situations.
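The abstract describes fusing gaze features with per-agent scene features via an attention mechanism before classification. A minimal sketch of that idea, assuming the gaze feature is a single vector that attends over one feature vector per surrounding traffic participant (all names, shapes, and the scaled-dot-product formulation here are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(gaze_feat, scene_feats):
    """Fuse a gaze feature vector with scene features via attention.

    gaze_feat:   (d,)   feature vector from the GA module (assumed shape)
    scene_feats: (n, d) one feature vector per traffic participant
                 from the SU module (assumed shape)

    Returns the concatenated fused feature (2d,) that a downstream
    classifier could consume, plus the attention weights (n,).
    """
    d = gaze_feat.shape[0]
    # Scaled dot-product scores: how relevant each agent is to the gaze.
    scores = scene_feats @ gaze_feat / np.sqrt(d)
    weights = softmax(scores)
    # Gaze-weighted summary of the scene.
    scene_context = weights @ scene_feats
    # Simple fusion by concatenation; the paper's combined classifier
    # would sit on top of a representation like this.
    return np.concatenate([gaze_feat, scene_context]), weights

# Toy usage: 4 surrounding agents, 8-dimensional features.
rng = np.random.default_rng(0)
fused, w = attention_fuse(rng.standard_normal(8), rng.standard_normal((4, 8)))
```

This is only one common way to realise attention-based fusion; the paper's GNN-based SU module and combined classifier are not reproduced here.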

Original language: English
Pages (from-to): 8109-8120
Number of pages: 12
Journal: IEEE Transactions on Intelligent Transportation Systems
Volume: 24
Issue number: 8
DOIs
Publication status: Published - 1 Aug 2023

Keywords

  • Driving behaviours
  • data fusion
  • gaze information
  • graph neural network
  • scene information

