
OpenMPD: An Open Multimodal Perception Dataset for Autonomous Driving

  • Xinyu Zhang
  • Zhiwei Li*
  • Yan Gong
  • Dafeng Jin
  • Jun Li
  • Li Wang
  • Yanzhang Zhu
  • Huaping Liu
  • *Corresponding author for this work
  • Tsinghua University

Research output: Contribution to journal › Article › peer-review

Abstract

Multi-modal sensor fusion techniques have promoted the development of autonomous driving, yet perception in complex environments remains a challenging problem. To tackle this problem, we propose the Open Multi-modal Perception dataset (OpenMPD), a multi-modal perception benchmark aimed at difficult examples. Compared with existing datasets, OpenMPD focuses more on complex urban traffic scenes with overexposure or darkness, crowded environments, unstructured roads, and intersections. We acquired the multi-modal data with a vehicle carrying six cameras and four LiDARs for a 360-degree field of view, collecting 180 clips of 20-second synchronized images at 20 Hz and point clouds at 10 Hz. In particular, we applied a 128-beam LiDAR to provide high-resolution point clouds for better 3D environment understanding and sensor fusion. We sampled 15 K keyframes at equal intervals from the clips for annotation, including 2D/3D object detection, 3D object tracking, and 2D semantic segmentation. Moreover, we provide benchmarks for all four tasks to evaluate algorithms, and we conduct extensive 2D/3D detection and segmentation experiments on OpenMPD. Data and further information are available at http://www.openmpd.com/.
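The abstract's collection figures imply the dataset's raw frame counts. A back-of-envelope sketch, using only the numbers stated above (180 clips of 20 s, six cameras at 20 Hz, four LiDARs at 10 Hz, ~15 K keyframes); the per-clip keyframe spacing is an inferred estimate, not a figure from the paper:

```python
# Frame-count arithmetic from the figures stated in the abstract.
CLIPS = 180          # number of 20-second clips
CLIP_SECONDS = 20
CAMERAS, CAMERA_HZ = 6, 20   # six cameras at 20 Hz
LIDARS, LIDAR_HZ = 4, 10     # four LiDARs at 10 Hz
KEYFRAMES = 15_000           # annotated keyframes, sampled at equal intervals

frames_per_clip_per_camera = CLIP_SECONDS * CAMERA_HZ   # 400 images per camera per clip
sweeps_per_clip_per_lidar = CLIP_SECONDS * LIDAR_HZ     # 200 sweeps per LiDAR per clip

total_images = CLIPS * CAMERAS * frames_per_clip_per_camera
total_sweeps = CLIPS * LIDARS * sweeps_per_clip_per_lidar
keyframes_per_clip = KEYFRAMES / CLIPS  # ~83 annotated keyframes per clip

print(total_images, total_sweeps, round(keyframes_per_clip, 1))
```

So the 15 K annotated keyframes sit on top of roughly 432,000 camera images and 144,000 LiDAR sweeps of raw synchronized data.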

Original language: English
Pages (from-to): 2437-2447
Number of pages: 11
Journal: IEEE Transactions on Vehicular Technology
Volume: 71
Issue number: 3
DOI
Publication status: Published - 1 Mar 2022
Published externally: Yes

