一种低照度场景下的视觉定位技术

Translated title of the contribution: A visual localization technology in low illumination scenes

Leilei Li, Ao Zhong, Jiamei Hao, Jiabin Chen, Yongqiang Han

Research output: Contribution to journal › Article › peer-review

Abstract

To solve the problems of excessive image noise and uneven feature extraction caused by insufficient or uneven illumination in low-light environments, a monocular visual localization technology for low-light scenes is proposed. First, a low-light sensor is used to collect low-light image information. To address image noise, a deep-learning-based image denoising network is designed and used to denoise the collected images. Then, a quadtree is used to improve the uniform feature extraction strategy, which improves feature tracking. The inter-frame camera pose is estimated using epipolar geometry, triangulation and related techniques. Finally, the visual reprojection error equation is constructed, and the bundle adjustment method is used for pose estimation and optimization. The experimental results show that, in a low-illumination environment with a light intensity of 0.01 lx, the average localization root mean square error of the proposed technology is less than 1.47 m when the trajectory contains a closed loop, and less than 4.26 m when it does not.
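
For context, the bundle adjustment step mentioned in the abstract generally minimizes a visual reprojection error over the camera poses and triangulated landmarks. The formulation below is a standard sketch for illustration, not the paper's exact equation; the symbols $T_i$ (pose of frame $i$), $P_j$ (triangulated 3D point $j$), $u_{ij}$ (observed pixel location of point $j$ in frame $i$), $\pi(\cdot)$ (camera projection model), $\Sigma_{ij}$ (measurement covariance) and $\rho(\cdot)$ (a robust loss) are illustrative assumptions:

\[
\min_{\{T_i\},\,\{P_j\}} \sum_{(i,j)} \rho\!\left( \left\| u_{ij} - \pi\!\left(T_i P_j\right) \right\|^{2}_{\Sigma_{ij}^{-1}} \right)
\]

Minimizing this nonlinear least-squares objective jointly refines the inter-frame poses initialized from epipolar geometry and the points obtained by triangulation.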

Original language: Chinese (Traditional)
Pages (from-to): 857-865
Number of pages: 9
Journal: Zhongguo Guanxing Jishu Xuebao/Journal of Chinese Inertial Technology
Volume: 32
Issue number: 9
DOIs
Publication status: Published - Sept 2024
