TY - JOUR
T1 - Multimodal super-resolved q-space deep learning
AU - Qin, Yu
AU - Li, Yuxing
AU - Zhuo, Zhizheng
AU - Liu, Zhiwen
AU - Liu, Yaou
AU - Ye, Chuyang
N1 - Publisher Copyright:
© 2021 Elsevier B.V.
PY - 2021/7
Y1 - 2021/7
N2 - Super-resolved q-space deep learning (SR-q-DL) estimates high-resolution (HR) tissue microstructure maps from low-quality diffusion magnetic resonance imaging (dMRI) scans acquired with a reduced number of diffusion gradients and low spatial resolution, using deep networks designed for the estimation. However, existing methods do not exploit HR information from other modalities, which are generally acquired together with dMRI and could provide additional useful information for HR tissue microstructure estimation. In this work, we extend SR-q-DL and propose multimodal SR-q-DL, where information in low-resolution (LR) dMRI is combined with HR information from another modality for HR tissue microstructure estimation. Because the HR modality may not be as sensitive to tissue microstructure as dMRI, direct concatenation of multimodal information does not necessarily improve estimation performance. Since existing deep networks for HR tissue microstructure estimation are patch-based and exploit redundant information in the spatial domain to enhance spatial resolution, the HR modality can inform the networks about which input voxels are relevant to the computation of tissue microstructure. We therefore incorporate information from the HR modality through an attention module that guides the computation of HR tissue microstructure from LR dMRI. Specifically, the attention module is integrated into the patch-based SR-q-DL framework, which exploits the sparsity of diffusion signals. A sparse representation of the LR diffusion signals in the input patch is first computed by a network component that unrolls an iterative sparse reconstruction. The attention module then computes a relevance map from the HR modality with sequential convolutional layers; the map indicates, at each voxel, how relevant the LR sparse representation is for computing the patch of HR tissue microstructure. The relevance map is applied to the LR sparse representation by voxelwise multiplication, and the weighted representation is passed to another network component that performs resolution enhancement and computes the HR tissue microstructure. All weights of the proposed network for multimodal SR-q-DL are learned jointly, and the estimation is performed end-to-end. To evaluate the proposed method, we performed experiments on brain dMRI scans together with images of additional HR modalities. The method was applied to the estimation of tissue microstructure measures on different datasets and with advanced biophysical models, demonstrating the benefit of incorporating multimodal information.
AB - Super-resolved q-space deep learning (SR-q-DL) estimates high-resolution (HR) tissue microstructure maps from low-quality diffusion magnetic resonance imaging (dMRI) scans acquired with a reduced number of diffusion gradients and low spatial resolution, using deep networks designed for the estimation. However, existing methods do not exploit HR information from other modalities, which are generally acquired together with dMRI and could provide additional useful information for HR tissue microstructure estimation. In this work, we extend SR-q-DL and propose multimodal SR-q-DL, where information in low-resolution (LR) dMRI is combined with HR information from another modality for HR tissue microstructure estimation. Because the HR modality may not be as sensitive to tissue microstructure as dMRI, direct concatenation of multimodal information does not necessarily improve estimation performance. Since existing deep networks for HR tissue microstructure estimation are patch-based and exploit redundant information in the spatial domain to enhance spatial resolution, the HR modality can inform the networks about which input voxels are relevant to the computation of tissue microstructure. We therefore incorporate information from the HR modality through an attention module that guides the computation of HR tissue microstructure from LR dMRI. Specifically, the attention module is integrated into the patch-based SR-q-DL framework, which exploits the sparsity of diffusion signals. A sparse representation of the LR diffusion signals in the input patch is first computed by a network component that unrolls an iterative sparse reconstruction. The attention module then computes a relevance map from the HR modality with sequential convolutional layers; the map indicates, at each voxel, how relevant the LR sparse representation is for computing the patch of HR tissue microstructure. The relevance map is applied to the LR sparse representation by voxelwise multiplication, and the weighted representation is passed to another network component that performs resolution enhancement and computes the HR tissue microstructure. All weights of the proposed network for multimodal SR-q-DL are learned jointly, and the estimation is performed end-to-end. To evaluate the proposed method, we performed experiments on brain dMRI scans together with images of additional HR modalities. The method was applied to the estimation of tissue microstructure measures on different datasets and with advanced biophysical models, demonstrating the benefit of incorporating multimodal information.
KW - Diffusion MRI
KW - Multimodal information
KW - Resolution enhancement
KW - Tissue microstructure
UR - http://www.scopus.com/inward/record.url?scp=85105476778&partnerID=8YFLogxK
U2 - 10.1016/j.media.2021.102085
DO - 10.1016/j.media.2021.102085
M3 - Article
C2 - 33971575
AN - SCOPUS:85105476778
SN - 1361-8415
VL - 71
JO - Medical Image Analysis
JF - Medical Image Analysis
M1 - 102085
ER -
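
Editor's note: the abstract describes the data flow in enough detail to sketch. Below is a minimal, hypothetical PyTorch sketch of that flow — a LISTA-style unrolled sparse-coding stage, a convolutional relevance (attention) map computed from the HR modality, voxelwise gating of the LR sparse codes, and a sub-voxel rearrangement for resolution enhancement. All module names, layer widths, iteration counts, and the 3D pixel-shuffle upsampling are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of multimodal SR-q-DL as described in the abstract.
# Module names, layer sizes, and iteration counts are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnrolledSparseCoder(nn.Module):
    """LISTA-style unrolling of iterative sparse reconstruction:
    x_{k+1} = soft_threshold(W e + S x_k), applied voxelwise."""
    def __init__(self, n_signals, n_atoms, n_iters=3):
        super().__init__()
        self.W = nn.Conv3d(n_signals, n_atoms, 1)   # voxelwise encoding
        self.S = nn.Conv3d(n_atoms, n_atoms, 1)     # voxelwise mutual inhibition
        self.theta = nn.Parameter(torch.full((n_atoms,), 0.1))
        self.n_iters = n_iters

    def soft(self, x):
        t = self.theta.view(1, -1, 1, 1, 1)
        return torch.sign(x) * F.relu(x.abs() - t)

    def forward(self, e):                           # e: (B, n_signals, D, H, W)
        we = self.W(e)
        x = self.soft(we)
        for _ in range(self.n_iters):
            x = self.soft(we + self.S(x))
        return x                                    # sparse codes: (B, n_atoms, D, H, W)

class RelevanceAttention(nn.Module):
    """Sequential conv layers mapping the HR-modality patch to a relevance
    map, resampled to the LR grid to gate the sparse codes voxelwise."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, hr_img, lr_size):             # hr_img: (B, 1, D', H', W')
        r = self.body(hr_img)                       # relevance on HR grid
        return F.interpolate(r, size=lr_size, mode="trilinear", align_corners=False)

class MultimodalSRqDL(nn.Module):
    def __init__(self, n_signals, n_atoms, n_measures, scale=2):
        super().__init__()
        self.coder = UnrolledSparseCoder(n_signals, n_atoms)
        self.attn = RelevanceAttention()
        # map weighted codes to sub-voxel channels for resolution enhancement
        self.map = nn.Conv3d(n_atoms, n_measures * scale**3, 3, padding=1)
        self.scale, self.n_measures = scale, n_measures

    def forward(self, lr_dmri, hr_img):
        x = self.coder(lr_dmri)                     # LR sparse representation
        r = self.attn(hr_img, lr_dmri.shape[2:])    # relevance map on LR grid
        x = x * r                                   # voxelwise weighting
        y = self.map(x)                             # (B, M*s^3, d, h, w)
        # rearrange sub-voxel channels into an HR patch (3D pixel shuffle)
        B, _, D, H, W = y.shape
        s, M = self.scale, self.n_measures
        y = y.view(B, M, s, s, s, D, H, W)
        y = y.permute(0, 1, 5, 2, 6, 3, 7, 4).reshape(B, M, D * s, H * s, W * s)
        return y                                    # HR microstructure patch

# Illustrative shapes only: 30 diffusion signals, 200 dictionary atoms,
# 2 microstructure measures, 2x resolution enhancement.
net = MultimodalSRqDL(n_signals=30, n_atoms=200, n_measures=2, scale=2)
lr = torch.randn(1, 30, 8, 8, 8)    # LR dMRI patch
hr = torch.randn(1, 1, 16, 16, 16)  # co-registered HR-modality patch
out = net(lr, hr)                   # -> (1, 2, 16, 16, 16)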