LCPR: A Multi-Scale Attention-Based LiDAR-Camera Fusion Network for Place Recognition

Zijie Zhou, Jingyi Xu, Guangming Xiong, Junyi Ma*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

Place recognition is one of the most crucial modules for autonomous vehicles, enabling them to identify previously visited places in GPS-denied environments. Sensor fusion is considered an effective way to overcome the weaknesses of individual sensors, and in recent years multimodal place recognition, which fuses information from multiple sensors, has attracted increasing attention. However, most existing multimodal place recognition methods use only limited field-of-view camera images, which leads to an imbalance between features from different modalities and limits the effectiveness of sensor fusion. In this letter, we present LCPR, a novel neural network for robust multimodal place recognition that fuses LiDAR point clouds with multi-view RGB images to generate discriminative and yaw-rotation-invariant representations of the environment. A multi-scale attention-based fusion module is proposed to fully exploit the panoramic views of the environment from different modalities and their correlations. We evaluate our method on the nuScenes dataset, and the experimental results show that it effectively exploits multi-view camera and LiDAR data to improve place recognition performance while remaining robust to viewpoint changes.
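To make the fusion idea concrete, below is a minimal PyTorch sketch of a multi-scale cross-modal attention block in the spirit of the abstract: per-scale LiDAR and camera feature maps, rendered into a shared panoramic layout, are fused by multi-head cross attention and pooled over the full yaw range into one global descriptor. All module names, feature dimensions, and the mean-pooling choice are illustrative assumptions, not the authors' released implementation.

# Hypothetical sketch of multi-scale attention-based LiDAR-camera fusion.
# Names, dimensions, and pooling are assumptions for illustration only.
import torch
import torch.nn as nn


class MultiScaleAttentionFusion(nn.Module):
    """Fuses panoramic LiDAR and camera feature maps at several scales
    with cross-modal multi-head attention, then pools a global descriptor."""

    def __init__(self, channels=(128, 256), num_heads=4, desc_dim=256):
        super().__init__()
        self.attn = nn.ModuleList(
            nn.MultiheadAttention(c, num_heads, batch_first=True) for c in channels
        )
        self.proj = nn.ModuleList(nn.Linear(c, desc_dim) for c in channels)

    def forward(self, lidar_feats, cam_feats):
        # lidar_feats / cam_feats: lists of (B, C_i, H_i, W_i) maps, one per
        # scale, already aligned in a shared panoramic (yaw-indexed) layout.
        fused_per_scale = []
        for attn, proj, lf, cf in zip(self.attn, self.proj, lidar_feats, cam_feats):
            q = lf.flatten(2).transpose(1, 2)   # (B, H*W, C): LiDAR queries
            kv = cf.flatten(2).transpose(1, 2)  # (B, H*W, C): camera keys/values
            fused, _ = attn(q, kv, kv)          # cross-modal attention
            fused = fused + q                   # residual connection
            # Pooling over the entire panorama (all yaw positions) makes the
            # pooled vector insensitive to yaw rotation of the input scene.
            fused_per_scale.append(proj(fused.mean(dim=1)))
        desc = torch.stack(fused_per_scale, dim=0).sum(dim=0)
        return nn.functional.normalize(desc, dim=-1)  # L2-normalized descriptor


if __name__ == "__main__":
    fusion = MultiScaleAttentionFusion()
    lidar = [torch.randn(2, 128, 8, 64), torch.randn(2, 256, 4, 32)]
    cam = [torch.randn(2, 128, 8, 64), torch.randn(2, 256, 4, 32)]
    print(fusion(lidar, cam).shape)  # torch.Size([2, 256])

Descriptors produced this way would be compared by cosine similarity for retrieval-based place recognition; the actual LCPR architecture should be taken from the paper itself.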

Original language: English
Pages (from-to): 1342-1349
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 9
Issue number: 2
Publication status: Published - 1 Feb 2024

Keywords

  • Place recognition
  • SLAM
  • deep learning
  • sensor fusion
