基于多传感器融合的协同感知方法

Translated title of the contribution: Collaborative Perception Method Based on Multisensor Fusion

Binglu Wang, Yang Jin, Lei Zhang, Le Zheng, Tianfei Zhou*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

This paper proposes a novel multimodal collaborative perception framework to enhance the situational awareness of autonomous vehicles. First, a multimodal fusion baseline system is built that effectively integrates Light Detection and Ranging (LiDAR) point clouds and camera images; this system provides a comparable benchmark for subsequent research. Second, several well-known feature fusion strategies are investigated in the collaborative setting, including channel-wise concatenation, element-wise summation, and transformer-based fusion. The study aims to seamlessly integrate intermediate representations from different sensor modalities and to comprehensively assess their effects on model performance. Extensive experiments are conducted on a large-scale open-source simulation dataset, OPV2V. The results show that attention-based multimodal fusion outperforms the alternative strategies, delivering more precise target localization in complex traffic scenarios and thereby enhancing the safety and reliability of autonomous driving systems.
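To make the three fusion strategies compared in the abstract concrete, the sketch below shows one plausible way to fuse per-cell bird's-eye-view (BEV) features from a LiDAR branch and a camera branch. This is not the paper's implementation: the module names, tensor shapes, and the choice to attend over a two-token (LiDAR/camera) sequence per BEV cell are illustrative assumptions, and the multi-agent (V2V) aggregation step of the collaborative pipeline is omitted.

```python
# Minimal sketch (assumed, not the paper's code) of the three intermediate-feature
# fusion strategies compared in the abstract: channel-wise concatenation,
# element-wise summation, and attention (transformer-style) fusion.
# Assumes each modality branch has already produced a BEV feature map of
# shape (B, C, H, W); all module and variable names here are illustrative.
import torch
import torch.nn as nn


class ConcatFusion(nn.Module):
    """Channel-wise concatenation followed by a 1x1 conv to restore C channels."""
    def __init__(self, channels: int):
        super().__init__()
        self.reduce = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, lidar_feat, camera_feat):
        return self.reduce(torch.cat([lidar_feat, camera_feat], dim=1))


class SumFusion(nn.Module):
    """Element-wise summation of the two modality features."""
    def forward(self, lidar_feat, camera_feat):
        return lidar_feat + camera_feat


class AttentionFusion(nn.Module):
    """Treats the two modality features at each BEV cell as a 2-token sequence
    and fuses them with multi-head self-attention."""
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, lidar_feat, camera_feat):
        b, c, h, w = lidar_feat.shape
        # (B, C, H, W) x 2  ->  (B*H*W, 2, C): two modality tokens per BEV cell.
        tokens = torch.stack([lidar_feat, camera_feat], dim=-1)   # (B, C, H, W, 2)
        tokens = tokens.permute(0, 2, 3, 4, 1).reshape(b * h * w, 2, c)
        fused, _ = self.attn(tokens, tokens, tokens)
        fused = fused.mean(dim=1)                                 # pool the 2 tokens
        return fused.reshape(b, h, w, c).permute(0, 3, 1, 2)


if __name__ == "__main__":
    lidar_feat = torch.randn(2, 64, 32, 32)
    camera_feat = torch.randn(2, 64, 32, 32)
    for fusion in (ConcatFusion(64), SumFusion(), AttentionFusion(64)):
        print(type(fusion).__name__, fusion(lidar_feat, camera_feat).shape)
```

In this reading, concatenation and summation are fixed, parameter-light combinations, while the attention variant learns per-location weights between the LiDAR and camera features, which is consistent with the abstract's finding that attention-based fusion localizes targets more precisely.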

Original language: Chinese (Traditional)
Pages (from-to): 87-96
Number of pages: 10
Journal: Journal of Radars
Volume: 13
Issue number: 1
DOIs
Publication status: Published - 2024
