Relative positioning technology of static target based on visual-inertial fusion

Yixiang Wang*, Leilei Li, Lin Liang, Yifei Yang, Zhe Zhang

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In this paper, the position of the target in the image is obtained by template matching. Feature points are then extracted and tracked between the target image and the source image, and an initial estimate of the target depth is obtained by visual measurement. A nonlinear least-squares problem built from visual and inertial constraints is then solved to obtain the optimized target depth. Experiments show that the depth estimation error is no more than 10% of the target distance.
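The pipeline summarized in the abstract can be illustrated with a short sketch. The code below is a minimal illustration, not the authors' implementation: it assumes OpenCV for template matching and Lucas-Kanade feature tracking and SciPy for the least-squares refinement. The intrinsic matrix K, the relative pose (R, t) supplied by the inertial side, and the initial depth depth0 are hypothetical inputs, and the inertial constraint enters only through that fixed relative pose rather than through a full IMU preintegration term.

```python
# Minimal sketch (not the authors' code) of the pipeline in the abstract:
# 1) locate the target by template matching, 2) track feature points between
# the source and target images, 3) refine a scalar target depth by nonlinear
# least squares using an (assumed) IMU-derived relative pose.
import cv2
import numpy as np
from scipy.optimize import least_squares


def locate_target(image, template):
    """Return the top-left corner of the best template match."""
    result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)
    return max_loc  # (x, y) of the matched window


def track_features(src_gray, tgt_gray, roi):
    """Detect corners inside the matched region and track them with LK flow."""
    x, y, w, h = roi
    mask = np.zeros_like(src_gray)
    mask[y:y + h, x:x + w] = 255
    pts_src = cv2.goodFeaturesToTrack(src_gray, maxCorners=100,
                                      qualityLevel=0.01, minDistance=7,
                                      mask=mask)
    pts_tgt, status, _ = cv2.calcOpticalFlowPyrLK(src_gray, tgt_gray,
                                                  pts_src, None)
    good = status.ravel() == 1
    return pts_src[good].reshape(-1, 2), pts_tgt[good].reshape(-1, 2)


def refine_depth(pts_src, pts_tgt, K, R, t, depth0):
    """Refine the target depth by minimizing pixel reprojection residuals,
    with (R, t) taken as the relative camera pose from the inertial side."""
    K_inv = np.linalg.inv(K)

    def residuals(params):
        depth = params[0]
        res = []
        for p_src, p_tgt in zip(pts_src, pts_tgt):
            ray = K_inv @ np.array([p_src[0], p_src[1], 1.0])
            point = depth * ray              # back-project at the current depth
            point2 = R @ point + t           # transform into the second frame
            proj = K @ point2
            proj = proj[:2] / proj[2]        # reproject into the image plane
            res.extend(proj - p_tgt)         # pixel reprojection error
        return np.asarray(res)

    sol = least_squares(residuals, x0=[depth0], method="lm")
    return sol.x[0]
```

In this sketch the region of interest passed to track_features would be built from the template-matching location and the template size, and depth0 would come from the visual measurement step described in the abstract.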

Original language: English
Title of host publication: 2022 IEEE 2nd International Conference on Electronic Technology, Communication and Information, ICETCI 2022
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 124-128
Number of pages: 5
ISBN (Electronic): 9781728181158
DOIs
Publication status: Published - 2022
Event: 2nd IEEE International Conference on Electronic Technology, Communication and Information, ICETCI 2022 - Changchun, China
Duration: 27 May 2022 - 29 May 2022

Publication series

Name: 2022 IEEE 2nd International Conference on Electronic Technology, Communication and Information, ICETCI 2022

Conference

Conference: 2nd IEEE International Conference on Electronic Technology, Communication and Information, ICETCI 2022
Country/Territory: China
City: Changchun
Period: 27/05/22 - 29/05/22

Keywords

  • Multi-sensor fusion
  • depth estimation
  • nonlinear optimization
  • target recognition
  • visual-inertial fusion
