基于改进图像增强的低照度场景视觉惯性定位方法

Translated title of the contribution: Visual-inertial localization method in low-light scene based on improved image enhancement

Leilei Li, Ao Zhong, Lin Liang, Chunming Lyu, Tao Zuo, Xiaochun Tian

Research output: Contribution to journal › Article › peer-review

Abstract

In order to improve the localization accuracy of the visual-inertial navigation system in low-light scenes, a visual-inertial localization algorithm combined with image enhancement technology is proposed. The camera response model is determined from the histograms of images taken at different exposures, and the model parameters are obtained by curve fitting. The illumination map and exposure matrix of a low-light image are determined by nonlinear optimization, and the low-light image is preprocessed according to the camera response model. The optical flow method is used for feature tracking, and the visual error, inertial measurement unit (IMU) error, and prior error are used as constraints to construct a tightly-coupled optimization model, so as to achieve more accurate pose estimation. Finally, the method is evaluated on real data collected by on-board equipment. The experimental results show that the proposed method effectively improves the localization accuracy of the visual-inertial navigation system in low-light scenes: compared with the method without image enhancement, the localization accuracy is improved by 25.59%, and compared with the method before the improvement, by 6.38%.
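The abstract does not give the exact form of the camera response model or the illumination-map optimization. A common choice in the low-light-enhancement literature is a beta-gamma camera response function f(P, k) = P^(k^a) · e^(b(1−k^a)), where k is an exposure ratio and (a, b) are parameters fitted from multi-exposure histograms. The sketch below is a minimal illustration of that style of preprocessing, assuming a beta-gamma model, an illumination map approximated by a smoothed per-pixel channel maximum (the paper estimates it by nonlinear optimization), and a per-pixel exposure ratio derived from it; the function names and constants are illustrative, not the authors' implementation.

```python
import numpy as np
import cv2  # used here only for Gaussian smoothing

# Illustrative camera-response-model parameters; in the paper these would
# come from curve fitting to histograms of differently exposed images.
A, B = -0.3293, 1.1258  # typical beta-gamma CRF values from the literature

def apply_crf(img, k):
    """Brighten an image by exposure ratio k with a beta-gamma camera response model."""
    gamma = k ** A
    beta = np.exp(B * (1.0 - gamma))
    return np.clip(beta * np.power(img, gamma), 0.0, 1.0)

def enhance_low_light(bgr_u8, smooth_sigma=5.0, eps=1e-3):
    """Minimal low-light preprocessing sketch: estimate an illumination map,
    convert it to a per-pixel exposure ratio, and apply the CRF."""
    img = bgr_u8.astype(np.float32) / 255.0
    # Crude illumination map: per-pixel channel maximum, spatially smoothed.
    illum = cv2.GaussianBlur(img.max(axis=2), (0, 0), smooth_sigma)
    # Exposure ratio: darker pixels receive a larger ratio.
    k = 1.0 / np.clip(illum, eps, 1.0)
    enhanced = apply_crf(img, k[..., None])
    return (enhanced * 255.0).astype(np.uint8)
```

For the front end, the abstract only states that feature tracking uses the optical flow method; a common concrete realization (assumed here, not stated in the abstract) is pyramidal Lucas-Kanade tracking of Shi-Tomasi corners on the enhanced frames, with a forward-backward check to reject bad tracks.

```python
import numpy as np
import cv2

def track_features(prev_gray, curr_gray, prev_pts):
    """Track corners between consecutive enhanced frames with pyramidal
    Lucas-Kanade, keeping only tracks that pass a forward-backward check."""
    nxt, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, prev_pts, None,
                                          winSize=(21, 21), maxLevel=3)
    back, st2, _ = cv2.calcOpticalFlowPyrLK(curr_gray, prev_gray, nxt, None,
                                            winSize=(21, 21), maxLevel=3)
    fb_err = np.linalg.norm(prev_pts - back, axis=2).ravel()
    good = (st.ravel() == 1) & (st2.ravel() == 1) & (fb_err < 1.0)
    return prev_pts[good], nxt[good]

# Typical usage on the enhanced grayscale frames (parameters are illustrative):
# pts0 = cv2.goodFeaturesToTrack(gray0, maxCorners=200, qualityLevel=0.01, minDistance=20)
# matched0, matched1 = track_features(gray0, gray1, pts0)
```

The surviving feature tracks would then feed the tightly-coupled back end described in the abstract, where visual reprojection errors, IMU preintegration errors, and prior errors are jointly minimized to estimate the pose.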

Translated title of the contribution: Visual-inertial localization method in low-light scene based on improved image enhancement
Original language: Chinese (Traditional)
Pages (from-to): 783-789
Number of pages: 7
Journal: Zhongguo Guanxing Jishu Xuebao/Journal of Chinese Inertial Technology
Volume: 31
Issue number: 8
DOIs
Publication status: Published - Aug 2023

