Calibration and three-dimensional reconstruction with a photorealistic simulator based on the omnidirectional vision system

Ivan Kholodilin, Yuan Li*, Qinglin Wang, Paul David Bourke

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Recent advancements in deep learning require large amounts of annotated training data covering varied environmental conditions. Developing and testing algorithms for the navigation of mobile robots can therefore be expensive and time-consuming. Motivated by these problems, this article presents a photorealistic simulator for the computer vision community working with omnidirectional vision systems. Built with Unity, the simulator integrates sensors, mobile robots, and elements of the indoor environment, and allows one to generate synthetic photorealistic data sets with automatic ground-truth annotations. With the aid of the proposed simulator, two practical applications are studied: extrinsic calibration of the vision system and three-dimensional reconstruction of the indoor environment. The proposed calibration and reconstruction procedures are simple, robust, and accurate, and are evaluated experimentally on data generated by the simulator. The proposed simulator and supporting materials are available online: http://www.ilabit.org.
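To give a sense of the geometry involved in omnidirectional vision, the sketch below projects a 3D point through the equidistant fisheye model (r = f·θ), one common model for omnidirectional cameras. This is an illustrative assumption only; the paper's own camera model, calibration method, and parameter names are not specified in the abstract.

```python
import numpy as np

def project_equidistant(point_3d, f, cx, cy):
    """Project a 3D point (in the camera frame) to pixel coordinates
    using the equidistant fisheye model r = f * theta.
    f, cx, cy are hypothetical intrinsics for illustration."""
    x, y, z = point_3d
    theta = np.arctan2(np.hypot(x, y), z)  # angle from the optical axis
    phi = np.arctan2(y, x)                 # azimuth around the axis
    r = f * theta                          # equidistant radial mapping
    return cx + r * np.cos(phi), cy + r * np.sin(phi)

# A point on the optical axis maps to the principal point (cx, cy):
u, v = project_equidistant((0.0, 0.0, 1.0), f=300.0, cx=320.0, cy=240.0)
```

In a simulator such as the one described, ground-truth 3D points and poses are known, so projections like this can be compared directly against rendered pixels to validate calibration and reconstruction pipelines.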

Original language: English
Journal: International Journal of Advanced Robotic Systems
Volume: 18
Issue number: 6
DOIs
Publication status: Published - 8 Dec 2021

Keywords

  • Calibration
  • Measurements
  • Omnidirectional vision
  • Simulation
  • Structured light
