一种多层多模态融合 3D 目标检测方法

Zhi Guo Zhou, Wen Hao Ma

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Camera and lidar are the key sources of information in autonomous vehicles (AVs). However, in current 3D object detection tasks, most pure point cloud networks outperform networks that fuse images with laser point clouds. Existing studies attribute this to the viewpoint misalignment between image and lidar information and to the difficulty of matching heterogeneous features; a single-stage fusion algorithm also struggles to fully fuse the features of the two modalities. For this reason, a novel 3D object detection method based on multilayer multimodal fusion (3DMMF) is presented. First, in the early-fusion phase, point clouds are locally encoded by Frustum-RGB-PointPainting (FRP), formed from the 2D detection boxes. The encoded point cloud is then fed into a PointPillars detection network extended with a context-aware channel based on a self-attention mechanism. In the late-fusion phase, the 2D and 3D candidate boxes, taken before non-maximum suppression (NMS), are encoded as two sets of sparse tensors, and the final 3D detection result is obtained with the camera-lidar object candidates fusion (CLOCs) network. Experiments on the KITTI dataset show that this fusion detection method significantly outperforms the pure point cloud baseline, with an average mAP improvement of 6.24%.
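The early-fusion step described above decorates each lidar point with 2D detection information before the point cloud enters the 3D network. A minimal PointPainting-style sketch of that idea is shown below; the function name, array layouts, and projection convention are illustrative assumptions, not the paper's actual FRP implementation:

```python
import numpy as np

def paint_points(points, proj, score_map):
    """Append per-pixel 2D class scores to each lidar point that
    projects inside the image (PointPainting-style decoration).

    points:    (N, 3) lidar xyz coordinates
    proj:      (3, 4) camera projection matrix (lidar frame -> pixels)
    score_map: (H, W, C) per-pixel class scores from a 2D detector
    returns:   (N, 3 + C) painted points; points outside the image
               keep zero scores
    """
    H, W, C = score_map.shape
    # Homogeneous coordinates, then project to the image plane.
    homo = np.hstack([points, np.ones((len(points), 1))])  # (N, 4)
    pix = homo @ proj.T                                    # (N, 3)
    u = pix[:, 0] / pix[:, 2]
    v = pix[:, 1] / pix[:, 2]
    # Keep only points in front of the camera and inside the frame.
    inside = (pix[:, 2] > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    scores = np.zeros((len(points), C))
    ui = np.clip(u[inside].astype(int), 0, W - 1)
    vi = np.clip(v[inside].astype(int), 0, H - 1)
    scores[inside] = score_map[vi, ui]
    return np.hstack([points, scores])
```

In the paper's FRP variant, the painted channels would additionally be restricted to the frustums formed by the 2D detection boxes and combined with RGB values; the sketch above only shows the core score-painting operation.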

Translated title of the contribution: 3D Object Detection Based on Multilayer Multimodal Fusion
Original language: Traditional Chinese
Pages (from-to): 696-708
Number of pages: 13
Journal: Tien Tzu Hsueh Pao/Acta Electronica Sinica
Volume: 52
Issue: 3
DOI: 10.12263/DZXB.20220593
Publication status: Published - Mar 2024

Keywords

  • 3D target detection
  • auto-driving
  • multi-sensor fusion
  • point cloud coding
  • self-attention mechanism

Cite this

Zhou, Z. G., & Ma, W. H. (2024). 一种多层多模态融合 3D 目标检测方法. Tien Tzu Hsueh Pao/Acta Electronica Sinica, 52(3), 696-708. https://doi.org/10.12263/DZXB.20220593