Efficient Fusion of LiDAR and Camera Data for 3D Object Detection of Intelligent Vehicles

  • Yingjuan Tang*
  • Hongwen He

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

Abstract

Accurate 3D object detection is vital for autonomous driving, requiring efficient multimodal fusion of LiDAR and camera data. Existing fusion methods often struggle with the tradeoff between accuracy and computational efficiency. We propose UIB-FuseNet, which integrates a universal inverted bottleneck (UIB) fusion module to accelerate fusion while maintaining high detection performance. On the nuScenes dataset, UIB-FuseNet outperforms state-of-the-art methods, improving mAP by 2.78%, NDS by 2.03%, and increasing inference speed by 12.9%, demonstrating its effectiveness for real-time applications in autonomous driving.
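The abstract does not describe the UIB fusion module's internals, but a universal inverted bottleneck (in the general sense popularized by efficient vision backbones) follows an expand → depthwise → project pattern with a residual connection. The sketch below illustrates that pattern on a hypothetical fused LiDAR+camera feature map using plain NumPy; all shapes, weights, and names here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def pointwise(x, w):
    """1x1 convolution: mixes channels at each spatial position.
    x: (H, W, Cin), w: (Cin, Cout)."""
    return x @ w

def depthwise3x3(x, k):
    """3x3 depthwise convolution, zero padding, stride 1.
    x: (H, W, C), k: (3, 3, C) -- one kernel per channel."""
    H, W, C = x.shape
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += xp[i:i + H, j:j + W, :] * k[i, j, :]
    return out

def uib_block(x, expand=4):
    """Inverted-bottleneck sketch: expand channels with a 1x1 conv,
    apply a cheap depthwise 3x3, project back, add a residual.
    Weights are random here -- a real module would learn them."""
    C = x.shape[-1]
    w_expand = rng.standard_normal((C, C * expand)) * 0.1
    k_dw = rng.standard_normal((3, 3, C * expand)) * 0.1
    w_project = rng.standard_normal((C * expand, C)) * 0.1
    h = relu(pointwise(x, w_expand))
    h = relu(depthwise3x3(h, k_dw))
    return x + pointwise(h, w_project)  # residual keeps input shape

# Hypothetical fused LiDAR+camera BEV feature map (H, W, C); sizes assumed.
fused = rng.standard_normal((8, 8, 16))
out = uib_block(fused)
print(out.shape)  # (8, 8, 16)
```

The efficiency argument for this pattern is that the expensive spatial filtering happens depthwise (cost linear in channels), while full channel mixing is confined to the 1x1 convolutions, which is what makes such a block attractive as a fast fusion stage.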

Original language: English
Pages (from-to): 586-591
Number of pages: 6
Journal: Youth Academic Annual Conference of Chinese Association of Automation, YAC
Issue number: 2025
DOIs
Publication status: Published - 2025
Externally published: Yes
Event: 40th Youth Academic Annual Conference of Chinese Association of Automation, YAC 2025 - Zhengzhou, China
Duration: 17 May 2025 – 19 May 2025

Keywords

  • 3D object detection
  • LiDAR and camera data
  • Multimodal fusion
  • Universal inverted bottleneck
