An Optimized Multi-sensor Fused Object Detection Method for Intelligent Vehicles

  • Jiayu Shen, Qingxiao Liu, Huiyan Chen*
  • *Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

5 Citations (Scopus)

Abstract

An accurate and efficient environment perception system is crucial for intelligent vehicles. This study proposes an optimized 2D object detection method that uses multi-sensor fusion to improve the performance of the environment perception system. In the sensor fusion module, a depth completion network predicts a dense depth map, so that both dense and sparse RGB-D images can be obtained. An efficient object detection baseline is then optimized for intelligent vehicles. The method is evaluated on the KITTI 2D object detection dataset. The experimental results show that the proposed method is more accurate than many recent methods on the KITTI leaderboard, while requiring less inference time, demonstrating its high efficiency.
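The abstract does not specify how the completed depth map is combined with the RGB image. A common way to form an RGB-D input, shown as a minimal sketch below, is to concatenate the dense depth channel with the RGB channels and widen the detector's first convolution to four input channels. This is purely illustrative and not the authors' implementation; all names (make_rgbd_input, RGBDStem) are hypothetical.

```python
# Illustrative sketch only (not the paper's code): channel-level RGB-D fusion
# with a dense depth map produced by a depth completion network.
import torch
import torch.nn as nn

def make_rgbd_input(rgb: torch.Tensor, dense_depth: torch.Tensor) -> torch.Tensor:
    """Stack a 3xHxW RGB image and a 1xHxW dense depth map into a 4xHxW tensor."""
    # Normalize depth to [0, 1] so its scale roughly matches the RGB channels.
    depth = dense_depth / (dense_depth.max() + 1e-6)
    return torch.cat([rgb, depth], dim=0)

class RGBDStem(nn.Module):
    """First layer of a detection backbone, widened from 3 to 4 input channels."""
    def __init__(self, out_channels: int = 64):
        super().__init__()
        self.conv = nn.Conv2d(4, out_channels, kernel_size=7, stride=2, padding=3)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.conv(x)))

if __name__ == "__main__":
    rgb = torch.rand(3, 375, 1242)          # KITTI-sized RGB image
    dense_depth = torch.rand(1, 375, 1242)  # dense depth from a completion network
    rgbd = make_rgbd_input(rgb, dense_depth)
    features = RGBDStem()(rgbd.unsqueeze(0))
    print(features.shape)
```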

Original language: English
Title of host publication: 2020 IEEE 5th International Conference on Intelligent Transportation Engineering, ICITE 2020
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 265-270
Number of pages: 6
ISBN (Electronic): 9781728194097
DOIs
Publication status: Published - Sept 2020
Event: 5th IEEE International Conference on Intelligent Transportation Engineering, ICITE 2020 - Beijing, China
Duration: 11 Sept 2020 - 13 Sept 2020

Publication series

Name: 2020 IEEE 5th International Conference on Intelligent Transportation Engineering, ICITE 2020

Conference

Conference: 5th IEEE International Conference on Intelligent Transportation Engineering, ICITE 2020
Country/Territory: China
City: Beijing
Period: 11/09/20 - 13/09/20

Keywords

  • 2D object detection
  • deep learning
  • multi-sensor fusion
