TY - JOUR
T1 - SFCLI-Net
T2 - Spatial-frequency collaborative learning interpolation network for Computed Tomography slice synthesis
AU - Li, Wentao
AU - Song, Hong
AU - Ai, Danni
AU - Shi, Jieliang
AU - Fan, Jingfan
AU - Xiao, Deqiang
AU - Fu, Tianyu
AU - Lin, Yucong
AU - Wu, Wencan
AU - Yang, Jian
N1 - Publisher Copyright:
© 2025
PY - 2025/5/5
Y1 - 2025/5/5
N2 - To suppress noise and reduce patient radiation dose, Computed Tomography (CT) scans are often acquired with anisotropic resolution, typically manifested as sparser slices in the axial direction than in the other directions. Slice interpolation can effectively increase the axial resolution and mitigate this anisotropy. Existing convolution-based methods tend to fit low-frequency information first and high-frequency information later during training, which makes recovering high-frequency details more challenging. Because the core of slice interpolation is to recover high-frequency information from degraded images, this bias can negatively impact the interpolation process. To address this issue, we propose a Spatial-Frequency Collaborative Learning Interpolation Network (SFCLI-Net), which combines spatial- and frequency-domain information for CT slice synthesis. The network consists of two main components: the Spatial-Frequency Swin (SF-Swin) block and the Multi-view block. More specifically, the SF-Swin block includes spatial- and frequency-domain branches, enabling complementary information exchange between the two domains by leveraging the global information extraction capability of the Swin Transformer layer. The Multi-view block integrates sagittal and coronal view information into the primary axial view to further enhance interpolation performance. Experimental results demonstrate that our method achieves superior interpolation performance on both our private and public datasets, outperforming state-of-the-art methods.
AB - To suppress noise and reduce patient radiation dose, Computed Tomography (CT) scans are often acquired with anisotropic resolution, typically manifested as sparser slices in the axial direction than in the other directions. Slice interpolation can effectively increase the axial resolution and mitigate this anisotropy. Existing convolution-based methods tend to fit low-frequency information first and high-frequency information later during training, which makes recovering high-frequency details more challenging. Because the core of slice interpolation is to recover high-frequency information from degraded images, this bias can negatively impact the interpolation process. To address this issue, we propose a Spatial-Frequency Collaborative Learning Interpolation Network (SFCLI-Net), which combines spatial- and frequency-domain information for CT slice synthesis. The network consists of two main components: the Spatial-Frequency Swin (SF-Swin) block and the Multi-view block. More specifically, the SF-Swin block includes spatial- and frequency-domain branches, enabling complementary information exchange between the two domains by leveraging the global information extraction capability of the Swin Transformer layer. The Multi-view block integrates sagittal and coronal view information into the primary axial view to further enhance interpolation performance. Experimental results demonstrate that our method achieves superior interpolation performance on both our private and public datasets, outperforming state-of-the-art methods.
KW - Computed tomography images
KW - Frequency domain
KW - Slice interpolation
KW - Super-resolution reconstruction
UR - http://www.scopus.com/inward/record.url?scp=85217067261&partnerID=8YFLogxK
U2 - 10.1016/j.eswa.2025.126602
DO - 10.1016/j.eswa.2025.126602
M3 - Article
AN - SCOPUS:85217067261
SN - 0957-4174
VL - 272
JO - Expert Systems with Applications
JF - Expert Systems with Applications
M1 - 126602
ER -