TY - JOUR
T1 - Adaptive Knowledge Distillation for High-Quality Unsupervised MRI Reconstruction with Model-Driven Priors
AU - Wu, Zhengliang
AU - Li, Xuesong
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2024/6/1
Y1 - 2024/6/1
N2 - Magnetic Resonance Imaging (MRI) reconstruction has made significant progress with the introduction of Deep Learning (DL) technology combined with Compressed Sensing (CS). However, most existing methods require large fully sampled training datasets to supervise the training process, which may be unavailable in many applications. Current unsupervised models also show limitations in performance or speed and may face unaligned distributions during testing. This paper proposes an unsupervised method to train competitive reconstruction models that can generate high-quality samples in an end-to-end style. First, teacher models are trained in a self-supervised manner by filling in re-undersampled images and comparing the results with the undersampled images. The teacher models are then distilled to train another cascade model that can leverage the entire undersampled k-space during both training and testing. Additionally, we propose an adaptive distillation method that re-weights the samples based on the variance among the teachers, which reflects the confidence of the reconstruction results, to improve the quality of distillation. Experimental results on multiple datasets demonstrate that our method significantly accelerates the inference process while preserving or even improving performance compared to the teacher model. In our tests, the distilled models show 5%-10% improvements in PSNR and SSIM compared with no distillation and are 10 times faster than the teacher.
AB - Magnetic Resonance Imaging (MRI) reconstruction has made significant progress with the introduction of Deep Learning (DL) technology combined with Compressed Sensing (CS). However, most existing methods require large fully sampled training datasets to supervise the training process, which may be unavailable in many applications. Current unsupervised models also show limitations in performance or speed and may face unaligned distributions during testing. This paper proposes an unsupervised method to train competitive reconstruction models that can generate high-quality samples in an end-to-end style. First, teacher models are trained in a self-supervised manner by filling in re-undersampled images and comparing the results with the undersampled images. The teacher models are then distilled to train another cascade model that can leverage the entire undersampled k-space during both training and testing. Additionally, we propose an adaptive distillation method that re-weights the samples based on the variance among the teachers, which reflects the confidence of the reconstruction results, to improve the quality of distillation. Experimental results on multiple datasets demonstrate that our method significantly accelerates the inference process while preserving or even improving performance compared to the teacher model. In our tests, the distilled models show 5%-10% improvements in PSNR and SSIM compared with no distillation and are 10 times faster than the teacher.
KW - Compressed sensing
KW - MRI reconstruction
KW - knowledge distillation
KW - unsupervised reconstruction
UR - http://www.scopus.com/inward/record.url?scp=85187258791&partnerID=8YFLogxK
U2 - 10.1109/JBHI.2024.3365784
DO - 10.1109/JBHI.2024.3365784
M3 - Article
C2 - 38349826
AN - SCOPUS:85187258791
SN - 2168-2194
VL - 28
SP - 3571
EP - 3582
JO - IEEE Journal of Biomedical and Health Informatics
JF - IEEE Journal of Biomedical and Health Informatics
IS - 6
ER -