The ParallelEye dataset: A large collection of virtual images for traffic vision research

Xuan Li, Kunfeng Wang*, Yonglin Tian, Lan Yan, Fang Deng, Fei-Yue Wang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

87 Citations (Scopus)

Abstract

Datasets play an essential role in the training and testing of traffic vision algorithms. However, collecting and annotating images from the real world is time-consuming, labor-intensive, and error-prone. Therefore, more and more researchers have begun to explore virtual datasets to overcome these disadvantages of real datasets. In this paper, we propose a systematic method for constructing large-scale artificial scenes and collect a new virtual dataset (named 'ParallelEye') for traffic vision research. The Unity3D rendering engine is used to simulate environmental changes in the artificial scenes and to generate ground-truth labels automatically, including semantic/instance segmentation, object bounding boxes, and so on. In addition, we utilize ParallelEye in combination with real datasets to conduct experiments. The experimental results show that the inclusion of virtual data helps to enhance per-class accuracy in object detection and semantic segmentation. Meanwhile, the results also illustrate that virtual data with controllable imaging conditions can be used to design evaluation experiments flexibly.
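The abstract describes training with a mixture of virtual (ParallelEye) and real images. Below is a minimal, hypothetical sketch of how such a mixed training pool could be assembled with PyTorch; the dataset wrappers `ParallelEyeDataset` and `RealTrafficDataset` are placeholder names and are not part of the paper's released code.

```python
# Hedged sketch: combining virtual and real samples into one training pool.
# Assumes PyTorch is available; ParallelEyeDataset and RealTrafficDataset
# are hypothetical wrappers, not an API provided by the paper.
from torch.utils.data import Dataset, ConcatDataset, DataLoader

class ParallelEyeDataset(Dataset):
    """Hypothetical wrapper around rendered virtual images and their
    automatically generated labels (segmentation masks, bounding boxes)."""
    def __init__(self, items):
        self.items = items  # list of (image_tensor, label) pairs

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        return self.items[idx]

class RealTrafficDataset(ParallelEyeDataset):
    """Hypothetical wrapper around a real-world traffic dataset."""
    pass

def make_mixed_loader(virtual_items, real_items, batch_size=8):
    # Concatenate virtual and real samples so a detector or segmentation
    # network sees both during training, mirroring the paper's setup of
    # using ParallelEye together with real datasets.
    mixed = ConcatDataset([ParallelEyeDataset(virtual_items),
                           RealTrafficDataset(real_items)])
    return DataLoader(mixed, batch_size=batch_size, shuffle=True)
```

This is only one plausible way to realize the "virtual plus real" training described in the abstract; the paper's actual experimental pipeline may differ.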

Original language: English
Article number: 8451919
Pages (from-to): 2072-2084
Number of pages: 13
Journal: IEEE Transactions on Intelligent Transportation Systems
Volume: 20
Issue number: 6
DOIs
Publication status: Published - Jun 2019

Keywords

  • ParallelEye
  • Traffic vision
  • artificial scenes
  • complex environments
  • parallel vision
  • virtual dataset
