Abstract
Accurate 3D object detection is vital for autonomous driving and requires efficient multimodal fusion of LiDAR and camera data. Existing fusion methods often struggle to balance accuracy against computational efficiency. We propose UIB-FuseNet, which integrates a universal inverted bottleneck (UIB) fusion module to accelerate fusion while maintaining high detection performance. On the nuScenes dataset, UIB-FuseNet outperforms state-of-the-art methods, improving mAP by 2.78% and NDS by 2.03% while increasing inference speed by 12.9%, demonstrating its effectiveness for real-time applications in autonomous driving.
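The abstract does not detail the UIB fusion module's internals. As a rough illustration only, the following is a minimal NumPy sketch of a generic universal-inverted-bottleneck-style block (pointwise expansion, depthwise 3x3 convolution, linear pointwise projection, optional residual). All function and parameter names here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def pointwise(x, w):
    """1x1 convolution as a matrix product: (H, W, Cin) @ (Cin, Cout)."""
    return x @ w

def depthwise3x3(x, k):
    """Per-channel 3x3 convolution with zero padding.

    x: (H, W, C) feature map, k: (3, 3, C) one kernel per channel.
    """
    H, W, C = x.shape
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += xp[i:i + H, j:j + W, :] * k[i, j, :]
    return out

def uib_block(x, w_expand, k_dw, w_project):
    """Illustrative universal-inverted-bottleneck-style block (not the paper's exact design)."""
    h = np.maximum(pointwise(x, w_expand), 0.0)   # expand channels + ReLU
    h = np.maximum(depthwise3x3(h, k_dw), 0.0)    # cheap spatial mixing + ReLU
    y = pointwise(h, w_project)                   # linear projection back down
    return y + x if y.shape == x.shape else y     # residual only when shapes match
```

The inverted-bottleneck pattern keeps the expensive spatial convolution depthwise (one kernel per channel), which is what makes such blocks attractive for speeding up a fusion stage.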
| Original language | English |
|---|---|
| Pages (from-to) | 586-591 |
| Number of pages | 6 |
| Journal | Youth Academic Annual Conference of Chinese Association of Automation, YAC |
| Issue number | 2025 |
| DOIs | |
| Publication status | Published - 2025 |
| Externally published | Yes |
| Event | 40th Youth Academic Annual Conference of Chinese Association of Automation, YAC 2025 |
| Event location | Zhengzhou, China |
| Event duration | 17 May 2025 → 19 May 2025 |
Keywords
- 3D object detection
- LiDAR and camera data
- Multimodal fusion
- Universal inverted bottleneck
Title
Efficient Fusion of LiDAR and Camera Data for 3D Object Detection of Intelligent Vehicles