TY - GEN
T1 - RPFA-Net
T2 - 2021 IEEE International Intelligent Transportation Systems Conference, ITSC 2021
AU - Xu, Baowei
AU - Zhang, Xinyu
AU - Wang, Li
AU - Hu, Xiaomei
AU - Li, Zhiwei
AU - Pan, Shuyue
AU - Li, Jun
AU - Deng, Yongqiang
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021/9/19
Y1 - 2021/9/19
N2 - 3D object detection is a crucial problem in environmental perception for autonomous driving. Currently, most works focus on LiDAR, cameras, or their fusion, while very few algorithms involve RaDAR sensors, especially 4D RaDAR, which provides 3D position and velocity information. 4D RaDAR works well in bad weather and offers higher performance than traditional 3D RaDAR, but it also contains a lot of noise and suffers from measurement ambiguities. Existing 3D object detection methods cannot judge the heading of objects because they focus only on local features in sparse point clouds. To overcome this problem, we propose a new method named RPFA-Net that uses only a 4D RaDAR and employs a self-attention mechanism instead of PointNet to extract the point clouds' global features. These global features, which contain long-distance information, effectively improve the network's ability to regress the heading angle of objects and enhance detection accuracy. Our method improves on the baseline by 8.13% in 3D mAP and 5.52% in BEV mAP. Extensive experiments show that RPFA-Net surpasses state-of-the-art 3D detection methods on the Astyx HiRes 2019 dataset. The code and pre-trained models are available at https://github.com/adept-thu/RPFA-Net.git.
AB - 3D object detection is a crucial problem in environmental perception for autonomous driving. Currently, most works focus on LiDAR, cameras, or their fusion, while very few algorithms involve RaDAR sensors, especially 4D RaDAR, which provides 3D position and velocity information. 4D RaDAR works well in bad weather and offers higher performance than traditional 3D RaDAR, but it also contains a lot of noise and suffers from measurement ambiguities. Existing 3D object detection methods cannot judge the heading of objects because they focus only on local features in sparse point clouds. To overcome this problem, we propose a new method named RPFA-Net that uses only a 4D RaDAR and employs a self-attention mechanism instead of PointNet to extract the point clouds' global features. These global features, which contain long-distance information, effectively improve the network's ability to regress the heading angle of objects and enhance detection accuracy. Our method improves on the baseline by 8.13% in 3D mAP and 5.52% in BEV mAP. Extensive experiments show that RPFA-Net surpasses state-of-the-art 3D detection methods on the Astyx HiRes 2019 dataset. The code and pre-trained models are available at https://github.com/adept-thu/RPFA-Net.git.
UR - http://www.scopus.com/inward/record.url?scp=85118437547&partnerID=8YFLogxK
U2 - 10.1109/ITSC48978.2021.9564754
DO - 10.1109/ITSC48978.2021.9564754
M3 - Conference contribution
AN - SCOPUS:85118437547
T3 - IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC
SP - 3061
EP - 3066
BT - 2021 IEEE International Intelligent Transportation Systems Conference, ITSC 2021
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 19 September 2021 through 22 September 2021
ER -