Structural-Information-Based Motion Artifact Correction for OCT and OCTA Images of Anterior Segments


Haozhe Zhong, Liangqi Cao, Xiao Zhang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Objective Optical coherence tomography (OCT) and optical coherence tomography angiography (OCTA) are powerful optical imaging modalities characterized by high speed, non-invasiveness, and high resolution. They have been widely employed in biomedical imaging, especially in the diagnosis of ophthalmic diseases. With the development of OCT and OCTA, image quality in ophthalmology has greatly improved, but several practical challenges remain. One serious problem is the artifacts caused by patients' uncontrolled motion during 3D-OCT/OCTA scanning, especially in ophthalmic applications. These motion artifacts degrade OCT image quality and distort the reconstructed 3D structures. Furthermore, they can cause misdiagnosis of ophthalmic diseases and hinder studies of ophthalmic diseases that rely on OCT imaging. Several approaches have been proposed for correcting motion artifacts in 3D-OCT scanning of the fundus, but there is little research on motion artifact correction in 3D-OCT imaging of anterior segments. Motion artifact correction for 3D-OCT scanning of anterior segments is therefore of great significance. We propose an artifact correction method for high-quality 3D-OCT and OCTA imaging of anterior segments based on the inherent structures of the anterior segment.

Methods Because the spacing between adjacent B-scans during 3D-OCT/OCTA scanning is short, we assume that the structural information of adjacent B-scans is almost identical and that the artifacts are caused by drifts between adjacent B-scans. The motion artifacts in 3D-OCT/OCTA can therefore be corrected by aligning the adjacent B-scans. Fig. 2 presents the proposed method for estimating the target's motion during three-dimensional scanning. The key idea is to calculate the relative shift between adjacent B-scans along the slow-scanning direction using cross-correlation algorithms. For comparison, we introduce two other motion estimation methods (method 1 and method 2), both based on the principle in Fig. 2 but applied to different image objects: the corneal C-scan and the iris OCTA obtained from 3D-OCT scanning of the anterior segment are used to estimate the motion curves in methods 1 and 2, respectively. The proposed method (method 3) combines the motion curves obtained by these two methods. The motion artifacts are then corrected by aligning adjacent B-scans according to the motion curves obtained by the three methods. The calculation procedures of the three methods are outlined in Fig. 3.
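To make the cross-correlation step concrete, the following Python sketch estimates the drift between adjacent B-scans and accumulates the pairwise shifts into a motion curve. It is a minimal illustration, assuming the volume is stored as a NumPy array of shape (number of B-scans, depth, fast axis); the depth-profile reduction, the function names, and the FFT-based correlation are our assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def estimate_shift(bscan_a, bscan_b):
    """Estimate the relative shift between two adjacent B-scans via
    FFT-based circular cross-correlation (an assumed stand-in for the
    paper's cross-correlation step)."""
    # Collapse each B-scan (depth x fast-axis) to a 1D depth profile so
    # the correlation is dominated by axial displacement.
    profile_a = bscan_a.mean(axis=1)
    profile_b = bscan_b.mean(axis=1)
    # Cross-correlation computed in the frequency domain.
    corr = np.fft.ifft(np.fft.fft(profile_b) * np.conj(np.fft.fft(profile_a))).real
    # The correlation peak gives the lag; map circular lags to signed shifts.
    lag = int(np.argmax(corr))
    if lag > len(corr) // 2:
        lag -= len(corr)
    return lag

def estimate_motion_curve(volume):
    """Accumulate pairwise shifts along the slow-scanning direction into
    a motion curve for the whole (n_bscans, depth, fast_axis) volume."""
    curve = [0]
    for i in range(1, volume.shape[0]):
        curve.append(curve[-1] + estimate_shift(volume[i - 1], volume[i]))
    return np.asarray(curve)
```

In this sketch, accumulating the pairwise lags turns relative drifts between neighboring frames into an absolute motion curve referenced to the first B-scan.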
Results and Discussions For demonstration, we perform in vivo 3D-OCT scanning on the anterior segments of anesthetized mice with a homemade SD-OCT system, and the three methods are applied to estimate the mouse motion. Fig. 4 shows the resulting motion curves. After anesthesia, the mouse exhibits only regular respiratory movement during imaging. The motion curve obtained by method 1 reflects the respiratory rhythm only in its middle segment, while the curve calculated by method 2 reflects the rhythm at both ends but shows a steep decline in the middle. The proposed method combines the accurate segments of the two curves, and the combined curve reproduces the respiratory rhythm over its entire length, as shown in Fig. 4. The OCT/OCTA images are then corrected according to the motion curves obtained by the three methods. The original and corrected corneal C-scan, iris OCTA, en face, and 3D volume images are compared in Figs. 5‒8. Methods 1 and 2 cannot completely correct the motion artifacts because the correlation between biological structures along the slow axis is insufficient. Taking method 2 as an example, the vessels in the middle of the iris in the OCTA images are distributed almost horizontally and thus cannot provide sufficient correlation for calculating the shift between B-scans by cross-correlation; the same limitation applies to method 1. The proposed method yields a sound global artifact correction for the anterior segment without overcorrection or under-correction. Compared with hardware solutions that rely on scanning laser ophthalmoscopy to correct motion artifacts, the proposed method requires no additional equipment and significantly reduces the complexity of the OCT system. Moreover, owing to the widespread symmetrical structures in biological tissues, the idea of estimating motion curves from structural information may be extended to correcting motion artifacts in OCT imaging of other biological samples.

Conclusions We propose a motion artifact correction method for OCT and OCTA based on structural information of the anterior segment. The motion curve is estimated by cross-correlation algorithms, and the artifacts and deviations caused by motion during 3D-OCT scanning are corrected according to the estimated curve. In experiments, the proposed method is demonstrated to correct motion artifacts in corneal C-scan, iris OCTA, en face, and three-dimensional volume images. It yields a sound global artifact correction for anterior segments without overcorrection or under-correction and provides a low-cost, effective artifact correction solution.
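As a rough illustration of the combination-and-correction stage described above, the following sketch fuses two motion curves and realigns each B-scan along the depth axis. The boolean reliability mask and the use of scipy.ndimage.shift for resampling are assumptions made here for illustration; the abstract does not specify how the accurate segments are selected or how the alignment is interpolated.

```python
import numpy as np
from scipy.ndimage import shift as subpixel_shift

def combine_curves(curve_1, curve_2, use_curve_1):
    """Fuse two motion curves, taking curve_1 where it is judged
    reliable and curve_2 elsewhere. The mask use_curve_1 is a
    hypothetical input standing in for the paper's unspecified
    segment-selection rule."""
    return np.where(use_curve_1, curve_1, curve_2)

def correct_volume(volume, motion_curve):
    """Undo the estimated per-B-scan drift by shifting each frame back
    along the depth axis; linear interpolation and edge replication are
    arbitrary choices for this sketch."""
    corrected = np.empty_like(volume, dtype=float)
    for i, frame in enumerate(volume):
        corrected[i] = subpixel_shift(frame, (-motion_curve[i], 0),
                                      order=1, mode="nearest")
    return corrected
```

Applying the negated motion curve frame by frame realigns all B-scans to a common reference, which is what removes the sawtooth-like distortion from the C-scan, en face, and volume renderings.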

Translated title of the contribution: Structural-Information-Based Motion Artifact Correction for OCT and OCTA Images of Anterior Segments
Original language: Chinese (Simplified)
Article number: 1917001
Journal: Guangxue Xuebao/Acta Optica Sinica
Volume: 44
Issue number: 19
DOIs
Publication status: Published - Oct 2024
