TY - JOUR
T1 - Dual Radar
T2 - A Multi-modal Dataset with Dual 4D Radar for Autonomous Driving
AU - Zhang, Xinyu
AU - Wang, Li
AU - Chen, Jian
AU - Fang, Cheng
AU - Yang, Guangqi
AU - Wang, Yichen
AU - Yang, Lei
AU - Song, Ziying
AU - Liu, Lin
AU - Zhang, Xiaofei
AU - Xu, Bin
AU - Li, Zhiwei
AU - Yang, Qingshan
AU - Li, Jun
AU - Zhang, Zhenlin
AU - Wang, Weida
AU - Ge, Shuzhi Sam
N1 - Publisher Copyright:
© The Author(s) 2025.
PY - 2025/12
Y1 - 2025/12
N2 - 4D radar offers higher point cloud density and more precise vertical resolution than conventional 3D radar, making it promising for environmental perception in adverse autonomous driving scenarios. However, 4D radar is noisier than LiDAR and requires different filtering strategies, which affect point cloud density and noise level. Comparative analyses across different point cloud densities and noise levels are still lacking, mainly because existing datasets use only one type of 4D radar, making it difficult to compare different 4D radars in the same scenario. We introduce a novel large-scale multi-modal dataset that simultaneously captures two types of 4D radar, consisting of 151 sequences, most of them 20 seconds long, with 10,007 synchronized and annotated frames in total. The dataset covers a variety of challenging driving scenarios, including multiple road conditions, weather conditions, lighting intensities, and times of day. It supports 3D object detection and tracking as well as multi-modal tasks. We experimentally validate the dataset, providing valuable insights for studying different types of 4D radar.
AB - 4D radar offers higher point cloud density and more precise vertical resolution than conventional 3D radar, making it promising for environmental perception in adverse autonomous driving scenarios. However, 4D radar is noisier than LiDAR and requires different filtering strategies, which affect point cloud density and noise level. Comparative analyses across different point cloud densities and noise levels are still lacking, mainly because existing datasets use only one type of 4D radar, making it difficult to compare different 4D radars in the same scenario. We introduce a novel large-scale multi-modal dataset that simultaneously captures two types of 4D radar, consisting of 151 sequences, most of them 20 seconds long, with 10,007 synchronized and annotated frames in total. The dataset covers a variety of challenging driving scenarios, including multiple road conditions, weather conditions, lighting intensities, and times of day. It supports 3D object detection and tracking as well as multi-modal tasks. We experimentally validate the dataset, providing valuable insights for studying different types of 4D radar.
UR - http://www.scopus.com/inward/record.url?scp=105000344730&partnerID=8YFLogxK
U2 - 10.1038/s41597-025-04698-2
DO - 10.1038/s41597-025-04698-2
M3 - Article
C2 - 40082463
AN - SCOPUS:105000344730
SN - 2052-4463
VL - 12
JO - Scientific Data
JF - Scientific Data
IS - 1
M1 - 439
ER -