MOD-SLAM: Visual SLAM with Moving Object Detection in Dynamic Environments

Jiarui Hu, Hao Fang, Qingkai Yang, Wenzhong Zha

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

7 Citations (Scopus)

Abstract

In recent years, significant progress has been made in visual simultaneous localization and mapping (VSLAM). Many existing geometric VSLAM systems rely on the assumption of static, well-lit environments, which hinders the generalization of VSLAM to the real world, where challenging scenes are common. To cope with these challenges, a real-time and robust visual-inertial SLAM system is proposed, which integrates a neural network for moving object detection (MOD) and greatly reduces the negative influence of dynamic objects. We have performed an ablation study to validate the effectiveness and necessity of our proposal. In addition, empirical evaluations on typical datasets, as well as in common dynamic environments, show that our framework can avoid tracking loss, yield a clean point cloud, and improve the accuracy of VSLAM.
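The abstract does not specify how the detector's output is used inside the SLAM pipeline. A common approach, sketched below purely as an illustration and not as the paper's method, is to discard feature points that fall inside regions flagged by the moving-object detector before they enter visual-inertial tracking and mapping; all function and variable names here are hypothetical.

```python
import numpy as np

def filter_dynamic_features(keypoints, moving_object_boxes):
    """Keep only keypoints that fall outside detected moving-object boxes.

    keypoints:           (N, 2) array of (u, v) pixel coordinates
    moving_object_boxes: list of (x_min, y_min, x_max, y_max) boxes produced
                         by a moving-object detector (e.g. a CNN)
    Returns a boolean mask of static keypoints and the filtered points.
    """
    static_mask = np.ones(len(keypoints), dtype=bool)
    for (x_min, y_min, x_max, y_max) in moving_object_boxes:
        inside = (
            (keypoints[:, 0] >= x_min) & (keypoints[:, 0] <= x_max) &
            (keypoints[:, 1] >= y_min) & (keypoints[:, 1] <= y_max)
        )
        static_mask &= ~inside  # drop points lying on dynamic objects
    return static_mask, keypoints[static_mask]


if __name__ == "__main__":
    # Toy example: four keypoints, one detection covering the upper-left region.
    kps = np.array([[50, 60], [200, 220], [75, 80], [400, 300]], dtype=float)
    boxes = [(0, 0, 100, 100)]  # hypothetical moving-object bounding box
    mask, static_kps = filter_dynamic_features(kps, boxes)
    print(mask)        # [False  True False  True]
    print(static_kps)  # only these points would feed visual-inertial tracking
```

Masking features this way keeps the geometric back end unchanged while removing observations that would otherwise corrupt pose estimation and leave dynamic-object ghosts in the point cloud.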

Original language: English
Title of host publication: Proceedings of the 40th Chinese Control Conference, CCC 2021
Editors: Chen Peng, Jian Sun
Publisher: IEEE Computer Society
Pages: 4302-4307
Number of pages: 6
ISBN (Electronic): 9789881563804
DOIs
Publication status: Published - 26 Jul 2021
Event: 40th Chinese Control Conference, CCC 2021 - Shanghai, China
Duration: 26 Jul 2021 - 28 Jul 2021

Publication series

Name: Chinese Control Conference, CCC
Volume: 2021-July
ISSN (Print): 1934-1768
ISSN (Electronic): 2161-2927

Conference

Conference: 40th Chinese Control Conference, CCC 2021
Country/Territory: China
City: Shanghai
Period: 26/07/21 - 28/07/21

Keywords

  • Dynamic Environments
  • Moving Object Detection
  • Visual SLAM
