MCSSAFNet: A multi-scale state-space attention fusion network for RGBT tracking

Chunbo Zhao, Bo Mo*, Dawei Li, Xinchun Wang, Jie Zhao, Junwei Xu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Most current cross-modal feature fusion methods take only the deep features from the last layer of the backbone network as input, leaving the detailed information in the backbone's shallow features unused; this limits the model's ability to cope with rapid target changes in cross-modal images. To address this problem, this paper proposes a novel tracker based on a multi-scale state-space attention fusion network (MCSSAFNet), which introduces Mamba to learn and fuse feature information from the two modalities at multiple scales. On this basis, an adaptive-aware loss function is proposed. It first adaptively weights the classification loss, strengthening attention to hard samples to resolve the imbalance between classification and localization scores and to improve discrimination of difficult targets. It then adaptively weights the IoU loss, enhancing the learning of high-quality samples while still improving low-quality ones, which raises the model's IoU accuracy. Comprehensive experiments on four mainstream public RGBT tracking datasets (RGBT210, RGBT234, LasHeR, and VTUAV) show that the proposed tracker outperforms existing algorithms while running at 37 fps on an RTX 3090 GPU.
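The abstract names the two weighting steps of the adaptive-aware loss but not their exact form. Below is a minimal PyTorch sketch of one plausible reading: hard samples receive focal-style classification weights, and each sample's IoU loss is re-weighted by its own IoU. The names `adaptive_aware_loss` and `elementwise_iou`, the box format, and the exponents `gamma` and `iou_power` are illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F


def elementwise_iou(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """IoU of aligned box pairs in (x1, y1, x2, y2) format, shape (N, 4)."""
    lt = torch.max(a[:, :2], b[:, :2])          # top-left of intersection
    rb = torch.min(a[:, 2:], b[:, 2:])          # bottom-right of intersection
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    return inter / (area_a + area_b - inter).clamp(min=1e-6)


def adaptive_aware_loss(cls_logits, cls_targets, pred_boxes, gt_boxes,
                        gamma=2.0, iou_power=1.0):
    """Hypothetical adaptive-aware loss: the weighting functions are an
    assumption, since the abstract does not give the exact formulas."""
    # Adaptively weighted classification loss: low-confidence (hard)
    # samples get larger weights, focal-style.
    probs = torch.sigmoid(cls_logits)
    pt = torch.where(cls_targets > 0.5, probs, 1.0 - probs)
    cls_weight = (1.0 - pt) ** gamma
    bce = F.binary_cross_entropy_with_logits(
        cls_logits, cls_targets, reduction="none")
    cls_loss = (cls_weight * bce).mean()

    # Adaptively weighted IoU loss: high-quality boxes (large IoU)
    # dominate the average, while low-quality ones still contribute.
    iou = elementwise_iou(pred_boxes, gt_boxes)
    iou_weight = iou.detach() ** iou_power
    iou_loss = (iou_weight * (1.0 - iou)).sum() / iou_weight.sum().clamp(min=1e-6)

    return cls_loss + iou_loss
```

In this reading, `gamma` plays the same role as in the focal loss (larger values push more attention onto hard samples), and `iou_power` controls how strongly high-IoU samples are emphasized during box regression.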

Original language: English
Article number: 131394
Journal: Optics Communications
Volume: 577
DOI
Publication status: Published - Mar 2025
