SA-FlowNet: Event-based self-attention optical flow estimation with spiking-analogue neural networks

Fan Yang, Li Su*, Jinxiu Zhao, Xuena Chen, Xiangyu Wang, Na Jiang, Quan Hu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

7 Citations (Scopus)

Abstract

Inspired by biological vision mechanisms, event-based cameras capture continuous object motion and detect brightness changes independently and asynchronously, overcoming the limitations of traditional frame-based cameras. Complementarily, spiking neural networks (SNNs) offer asynchronous computation and exploit the inherent sparseness of spatio-temporal events. Event-based pixel-wise optical flow estimation computes the positions and relationships of objects across adjacent frames; however, because event-camera outputs are sparse and uneven, dense scene information is difficult to generate, and the local receptive fields of conventional neural networks lead to poor tracking of moving objects. To address these issues, an improved event-based self-attention optical flow estimation network (SA-FlowNet) is proposed, which independently uses criss-cross and temporal self-attention mechanisms to directly capture long-range dependencies and efficiently extract temporal and spatial features from the event streams. In the former mechanism, a cross-domain attention scheme that dynamically fuses temporal-spatial features is introduced. The proposed network adopts a spiking-analogue neural network architecture trained with an end-to-end learning method and gains significant computational energy benefits, especially for SNNs. State-of-the-art error rates for optical flow prediction on the Multi-Vehicle Stereo Event Camera (MVSEC) dataset are demonstrated in comparison with current SNN-based approaches.
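The criss-cross attention the abstract mentions restricts each position's attention to the positions in its own row and column, which keeps the cost far below full self-attention while still propagating long-range context. As a rough single-head NumPy sketch of that idea only (not the authors' implementation, which pairs it with spiking components and a temporal attention branch; the function name and shapes here are illustrative assumptions):

```python
import numpy as np

def criss_cross_attention(q, k, v):
    """Single-head criss-cross attention over an H x W x C feature map.

    Each position (i, j) attends only to positions in row i and column j.
    Note: the centre position (i, j) appears twice in the gathered path;
    CCNet-style implementations drop the duplicate, omitted here for brevity.
    """
    H, W, C = q.shape
    out = np.zeros_like(v, dtype=float)
    for i in range(H):
        for j in range(W):
            # Gather keys/values along the criss-cross path of (i, j):
            # the whole row i (W entries) plus the whole column j (H entries).
            ks = np.concatenate([k[i, :, :], k[:, j, :]], axis=0)  # (W + H, C)
            vs = np.concatenate([v[i, :, :], v[:, j, :]], axis=0)  # (W + H, C)
            # Scaled dot-product scores, softmax-normalised over the path.
            scores = ks @ q[i, j] / np.sqrt(C)
            w = np.exp(scores - scores.max())
            w /= w.sum()
            out[i, j] = w @ vs  # attention-weighted sum of path values
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 5, 8))   # a 4x5 feature map with 8 channels
y = criss_cross_attention(x, x, x)
print(y.shape)  # (4, 5, 8)
```

Stacking two such passes lets information reach every position from every other one, which is how the paper's long-range dependency claim is usually realised for this mechanism.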

Original language: English
Pages (from-to): 925-935
Number of pages: 11
Journal: IET Computer Vision
Volume: 17
Issue number: 8
DOIs
Publication status: Published - Dec 2023

Keywords

  • computer vision
  • feature extraction
  • motion estimation
  • optical tracking
