SA-FlowNet: Event-based self-attention optical flow estimation with spiking-analogue neural networks

Fan Yang, Li Su*, Jinxiu Zhao, Xuena Chen, Xiangyu Wang, Na Jiang, Quan Hu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

Inspired by biological vision mechanisms, event-based cameras capture continuous object motion by detecting brightness changes independently and asynchronously at each pixel, overcoming the limitations of traditional frame-based cameras. Complementarily, spiking neural networks (SNNs) offer asynchronous computation and exploit the inherent sparseness of spatio-temporal events. Event-based pixel-wise optical flow estimation computes the positions and relationships of objects across adjacent frames; however, because event-camera outputs are sparse and uneven, dense scene information is difficult to generate, and the local receptive fields of conventional neural networks lead to poor tracking of moving objects. To address these issues, an improved event-based self-attention optical flow estimation network (SA-FlowNet) is proposed, which independently uses criss-cross and temporal self-attention mechanisms to directly capture long-range dependencies and efficiently extract temporal and spatial features from event streams. Within the former mechanism, a cross-domain attention scheme that dynamically fuses temporal-spatial features is introduced. The proposed network adopts a spiking-analogue neural network architecture trained with an end-to-end learning method and gains significant computational energy benefits, especially for SNNs. State-of-the-art error rates for optical flow prediction on the Multi-Vehicle Stereo Event Camera (MVSEC) dataset are demonstrated in comparison with current SNN-based approaches.
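The criss-cross attention the abstract refers to restricts each pixel's attention to the positions in its own row and column, so long-range dependencies are captured at a fraction of the cost of full spatial self-attention. The following is a minimal NumPy sketch of that idea only, not the paper's SA-FlowNet implementation; the function name and the single-head, unprojected `q`/`k`/`v` layout are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def criss_cross_attention(q, k, v):
    """Toy criss-cross attention over (H, W, C) feature maps.

    Each position (i, j) attends only to the H + W positions lying in
    row i and column j (a cross-shaped receptive field); stacking two
    such layers propagates information across the whole image.
    """
    H, W, C = q.shape
    out = np.zeros_like(v)
    for i in range(H):
        for j in range(W):
            # keys/values gathered along the criss-cross path
            keys = np.concatenate([k[i, :, :], k[:, j, :]], axis=0)  # (H+W, C)
            vals = np.concatenate([v[i, :, :], v[:, j, :]], axis=0)  # (H+W, C)
            # scaled dot-product attention weights over the path
            attn = softmax(keys @ q[i, j] / np.sqrt(C))              # (H+W,)
            out[i, j] = attn @ vals
    return out
```

Because the attention weights at each position sum to one, a constant value map passes through unchanged, which makes the mechanism easy to sanity-check.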

Original language: English
Pages (from-to): 925-935
Number of pages: 11
Journal: IET Computer Vision
Volume: 17
Issue number: 8
DOI
Publication status: Published - Dec 2023
