SiamEFT: adaptive-time feature extraction hybrid network for RGBE multi-domain object tracking

Shuqi Liu, Gang Wang*, Yong Song*, Jinxiang Huang, Yiqian Huang, Ya Zhou, Shiqiang Wang

*Corresponding authors of this work

Research output: Contribution to journal › Article › peer-review

Abstract

Integrating RGB and Event (RGBE) multi-domain information, obtained from event cameras with high dynamic range and high temporal resolution, has been considered an effective scheme for robust object tracking. However, existing RGBE tracking methods have overlooked the unique spatio-temporal features across different domains, leading to tracking failure and inefficiency, especially for objects against complex backgrounds. To address this problem, we propose a novel tracker based on an adaptive-time feature extraction hybrid network, namely the Siamese Event Frame Tracker (SiamEFT), which focuses on the effective representation and utilization of the diverse spatio-temporal features of RGBE. We first design an adaptive-time attention module that aggregates event data into frames based on adaptive-time weights to enhance information representation. Subsequently, the SiamEF module and a cross-network fusion module, built on a hybrid network combining artificial neural networks and spiking neural networks, are designed to effectively extract and fuse the spatio-temporal features of RGBE. Extensive experiments on two RGBE datasets (VisEvent and COESOT) show that SiamEFT achieves success rates of 0.456 and 0.574, respectively, outperforming state-of-the-art competing methods and exhibiting a 2.3-fold enhancement in efficiency. These results validate the superior accuracy and efficiency of SiamEFT in diverse and challenging scenes.
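The adaptive-time aggregation idea described above can be illustrated with a minimal sketch: events (x, y, timestamp, polarity) are accumulated into a frame, with each event's contribution scaled by a time-dependent weight. Note that the paper's adaptive-time attention module learns its weights; the exponential recency weighting and the `tau` parameter below are hypothetical stand-ins for illustration only, not the authors' method.

```python
import numpy as np

def events_to_frame(events, height, width, tau=0.5):
    """Aggregate an event stream into a single frame with time-dependent weights.

    events : array of shape (N, 4) with columns (x, y, t, polarity).
    tau    : hypothetical decay constant controlling how strongly recent
             events dominate (stand-in for learned adaptive-time weights).
    """
    frame = np.zeros((height, width), dtype=np.float32)
    if len(events) == 0:
        return frame
    t = events[:, 2]
    # Normalize timestamps to [0, 1] within the aggregation window.
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9)
    # Recency weighting: the most recent event gets weight 1, earlier
    # events decay exponentially with rate 1/tau.
    weights = np.exp((t_norm - 1.0) / tau)
    for (x, y, _, p), w in zip(events, weights):
        frame[int(y), int(x)] += w * (1.0 if p > 0 else -1.0)
    return frame

# Toy usage: three events on a 2x2 sensor.
events = np.array([
    [0, 0, 0.0,  1],   # early positive event at (0, 0)
    [0, 0, 0.5, -1],   # mid-window negative event at (0, 0)
    [1, 1, 1.0,  1],   # latest positive event at (1, 1)
])
frame = events_to_frame(events, height=2, width=2)
```

In a full tracker the resulting event frame would be paired with the corresponding RGB frame before feature extraction; here the weighting simply makes the aggregated frame emphasize the object's most recent motion.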

Original language: English
Article number: 1453419
Journal: Frontiers in Neuroscience
Volume: 18
DOI
Publication status: Published - 2024
