GCANet: A Cross-Modal Pedestrian Detection Method Based on Gaussian Cross Attention Network

Peiran Peng, Feng Mu, Peilin Yan, Liqiang Song, Hui Li, Yu Chen, Jianan Li, Tingfa Xu*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Peer-reviewed

1 Citation (Scopus)

Abstract

Pedestrian detection is a critical but challenging research field with wide applications in self-driving, surveillance and robotics. Detection performance degrades under limited imaging conditions, especially at night or under occlusion. To overcome these obstacles, we propose a cross-modal pedestrian detection network based on Gaussian Cross Attention (GCANet), which improves detection performance by making full use of multi-modal features. Through bidirectional coupling of local features from the different modalities, the network realizes cross-modal feature interaction and fusion and effectively emphasizes the salient features shared across modalities, thereby improving detection accuracy. Experimental results demonstrate that GCANet achieves state-of-the-art accuracy on the KAIST multi-modal pedestrian dataset.
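The abstract gives no implementation details. The sketch below is only an illustration of the general idea of bidirectional cross-modal attention fusion between RGB and thermal feature maps; the module name, the Gaussian spatial bias, and all parameters are assumptions for exposition, not the authors' published GCANet code.

# Minimal sketch of bidirectional cross-modal attention fusion
# (illustrative only; not the published GCANet implementation).
import torch
import torch.nn as nn


class CrossModalAttentionFusion(nn.Module):
    """Fuses RGB and thermal feature maps via bidirectional cross-attention."""

    def __init__(self, channels: int, sigma: float = 4.0):
        super().__init__()
        # 1x1 projections for queries/keys/values in both directions.
        self.q_rgb = nn.Conv2d(channels, channels, 1)
        self.k_th = nn.Conv2d(channels, channels, 1)
        self.v_th = nn.Conv2d(channels, channels, 1)
        self.q_th = nn.Conv2d(channels, channels, 1)
        self.k_rgb = nn.Conv2d(channels, channels, 1)
        self.v_rgb = nn.Conv2d(channels, channels, 1)
        self.out = nn.Conv2d(2 * channels, channels, 1)
        self.sigma = sigma  # assumed width of the Gaussian spatial prior

    def _gaussian_bias(self, h, w, device):
        # Additive log-Gaussian bias favoring spatially nearby positions
        # across modalities (an assumption standing in for the paper's
        # Gaussian cross-attention formulation).
        ys, xs = torch.meshgrid(
            torch.arange(h, device=device),
            torch.arange(w, device=device),
            indexing="ij",
        )
        coords = torch.stack([ys.flatten(), xs.flatten()], -1).float()   # (HW, 2)
        dist2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)  # (HW, HW)
        return (-dist2 / (2 * self.sigma ** 2)).unsqueeze(0)              # (1, HW, HW)

    def _attend(self, q, k, v, bias):
        b, c, h, w = q.shape
        q = q.flatten(2).transpose(1, 2)              # (B, HW, C)
        k = k.flatten(2)                              # (B, C, HW)
        v = v.flatten(2).transpose(1, 2)              # (B, HW, C)
        attn = (torch.bmm(q, k) / c ** 0.5 + bias).softmax(-1)
        return torch.bmm(attn, v).transpose(1, 2).reshape(b, c, h, w)

    def forward(self, f_rgb, f_th):
        b, c, h, w = f_rgb.shape
        bias = self._gaussian_bias(h, w, f_rgb.device)
        # RGB queries attend to thermal keys/values, and vice versa.
        rgb_enh = f_rgb + self._attend(self.q_rgb(f_rgb), self.k_th(f_th), self.v_th(f_th), bias)
        th_enh = f_th + self._attend(self.q_th(f_th), self.k_rgb(f_rgb), self.v_rgb(f_rgb), bias)
        return self.out(torch.cat([rgb_enh, th_enh], dim=1))


if __name__ == "__main__":
    fuse = CrossModalAttentionFusion(channels=64)
    rgb, thermal = torch.randn(1, 64, 20, 16), torch.randn(1, 64, 20, 16)
    print(fuse(rgb, thermal).shape)  # torch.Size([1, 64, 20, 16])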

Original language: English
Title of host publication: Intelligent Computing - Proceedings of the 2022 Computing Conference
Editors: Kohei Arai
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 520-530
Number of pages: 11
ISBN (Print): 9783031104633
DOI
Publication status: Published - 2022
Event: Computing Conference, 2022 - Virtual, Online
Duration: 14 Jul 2022 - 15 Jul 2022

Publication series

Name: Lecture Notes in Networks and Systems
Volume: 507 LNNS
ISSN (Print): 2367-3370
ISSN (Electronic): 2367-3389

Conference

Conference: Computing Conference, 2022
Location: Virtual, Online
Period: 14/07/22 - 15/07/22
