弱光环境下基于深度学习的单光子计数成像去噪方法

Translated title of the contribution: Single-photon counting imaging denoising method based on deep learning in low-light environment

Zhihao Zhao, Zhaohua Yang, Yun Wu, Yuanjin Yu*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

The high sensitivity of single-pixel single-photon counting imaging makes it extremely advantageous for low-light detection, but the quality of the reconstructed images still degrades as the light flux weakens. A single-pixel single-photon counting imaging method based on deep-learning denoising is designed to improve the signal-to-noise ratio of reconstructed images in low-light environments. Firstly, a single-pixel single-photon counting imaging system is established. Then, a compressed sensing algorithm is used to reconstruct the image. Finally, the 3D block matching algorithm and a deep learning algorithm are used to denoise the reconstructed image, and the denoising effects of the two algorithms are compared. The results show that both the deep learning algorithm and the 3D block matching algorithm improve the signal-to-noise ratio of the image. The signal-to-noise ratio of the image obtained by the single-pixel single-photon counting imaging method based on deep-learning denoising is increased by 12.97 dB, a substantial gain in the low-light environment, and is higher than that obtained by the 3D block matching algorithm. This method therefore provides a new idea for improving the quality of images reconstructed by single-pixel single-photon imaging systems in low-light environments.
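The denoising comparison described in the abstract can be illustrated with a short sketch. The Python code below is a minimal illustration, not the authors' implementation: it applies the 3D block matching (BM3D) algorithm and a DnCNN-style residual CNN to a noisy reconstruction and reports the signal-to-noise ratio in dB. The package names (`bm3d`, `torch`), the `DnCNNLike` architecture, and the stand-in test image are assumptions; the network is untrained here and shown only to indicate the structure of the pipeline.

```python
# Minimal sketch, not the authors' code: compare BM3D with a DnCNN-style CNN
# denoiser on a reconstructed image and report the SNR gain in dB.
# Assumes the `bm3d`, `torch`, and `numpy` packages; the CNN weights are
# untrained and would need to be learned from noisy/clean reconstruction pairs.
import numpy as np
import torch
import torch.nn as nn
import bm3d


def snr_db(reference: np.ndarray, estimate: np.ndarray) -> float:
    """Signal-to-noise ratio of `estimate` with respect to `reference`, in dB."""
    noise = estimate - reference
    return 10.0 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))


class DnCNNLike(nn.Module):
    """Small residual CNN: predicts the noise and subtracts it from the input."""

    def __init__(self, depth: int = 8, width: int = 32):
        super().__init__()
        layers = [nn.Conv2d(1, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1),
                       nn.BatchNorm2d(width), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x - self.body(x)  # residual learning: output = input - estimated noise


def denoise_cnn(image: np.ndarray, model: nn.Module) -> np.ndarray:
    """Run one forward pass of the CNN denoiser on a 2-D image."""
    with torch.no_grad():
        x = torch.from_numpy(image).float()[None, None]  # -> (1, 1, H, W)
        return model(x)[0, 0].numpy()


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.random((64, 64))                              # stand-in for the true scene
    noisy = clean + 0.1 * rng.standard_normal(clean.shape)    # stand-in reconstruction

    denoised_bm3d = bm3d.bm3d(noisy, sigma_psd=0.1)
    denoised_cnn = denoise_cnn(noisy, DnCNNLike().eval())     # weights untrained here

    for name, img in [("noisy", noisy), ("BM3D", denoised_bm3d), ("CNN", denoised_cnn)]:
        print(f"{name:>5s}: SNR = {snr_db(clean, img):.2f} dB")
```

With trained weights, the SNR printed for the CNN output would correspond to the kind of gain reported in the paper; as written, the script only demonstrates how the two denoisers are invoked and compared under a common metric.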

Translated title of the contribution: Single-photon counting imaging denoising method based on deep learning in low-light environment
Original language: Chinese (Traditional)
Article number: 630531
Journal: Hangkong Xuebao/Acta Aeronautica et Astronautica Sinica
Volume: 46
Issue number: 3
DOIs
Publication status: Published - 15 Feb 2025
