Multimodal super-resolved q-space deep learning

Yu Qin, Yuxing Li, Zhizheng Zhuo, Zhiwen Liu*, Yaou Liu, Chuyang Ye

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

12 Citations (Scopus)

Abstract

Super-resolved q-space deep learning (SR-q-DL) has been developed to estimate high-resolution (HR) tissue microstructure maps from low-quality diffusion magnetic resonance imaging (dMRI) scans acquired with a reduced number of diffusion gradients and low spatial resolution, where deep networks are designed for the estimation. However, existing methods do not exploit HR information from other modalities, which are generally acquired together with dMRI and could provide additional useful information for HR tissue microstructure estimation. In this work, we extend SR-q-DL and propose multimodal SR-q-DL, where information in low-resolution (LR) dMRI is combined with HR information from another modality for HR tissue microstructure estimation. Because the HR modality may not be as sensitive to tissue microstructure as dMRI, direct concatenation of multimodal information does not necessarily improve estimation performance. Since existing deep networks for HR tissue microstructure estimation are patch-based and exploit redundant information in the spatial domain to enhance the spatial resolution, the HR information in the other modality can inform the deep networks about which input voxels are relevant for the computation of tissue microstructure. Thus, we propose to incorporate the information from the HR modality by designing an attention module that guides the computation of HR tissue microstructure from LR dMRI. Specifically, the attention module is integrated with the patch-based SR-q-DL framework that exploits the sparsity of diffusion signals. The sparse representation of the LR diffusion signals in the input patch is first computed with a network component that unrolls an iterative process for sparse reconstruction. Then, the proposed attention module computes a relevance map from the HR modality with sequential convolutional layers. The relevance map indicates the relevance of the LR sparse representation at each voxel for computing the patch of HR tissue microstructure. The relevance is applied to the LR sparse representation with voxelwise multiplication, and the weighted LR sparse representation is used to compute HR tissue microstructure with another network component that performs resolution enhancement. All weights in the proposed network for multimodal SR-q-DL are jointly learned, and the estimation is end-to-end. To evaluate the proposed method, we performed experiments on brain dMRI scans together with images of additional HR modalities. In the experiments, the proposed method was applied to the estimation of tissue microstructure measures for different datasets and advanced biophysical models, demonstrating the benefit of incorporating multimodal information with the proposed method.
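The pipeline described above — unrolled iterative sparse coding of the LR diffusion signals, a relevance map computed from the HR modality, and voxelwise multiplication of the two — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the dictionary, the toy two-layer stand-in for the sequential convolutional layers, and all shapes and weights below are hypothetical placeholders chosen only to show the data flow.

```python
import numpy as np

def soft_threshold(x, theta):
    # Proximal operator of the L1 norm, applied in each unrolled iteration.
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unrolled_sparse_code(signals, dictionary, n_iter=5, theta=0.1):
    """LISTA-style unrolled sparse coding of LR diffusion signals.
    signals: (n_voxels, n_gradients); dictionary: (n_gradients, n_atoms).
    Returns the sparse representation, shape (n_voxels, n_atoms)."""
    # Step-size-scaled adjoint; the spectral norm sets a valid ISTA step size.
    W = dictionary.T / np.linalg.norm(dictionary, ord=2) ** 2
    S = np.eye(dictionary.shape[1]) - W @ dictionary
    z = soft_threshold(signals @ W.T, theta)
    for _ in range(n_iter):  # fixed number of unrolled iterations
        z = soft_threshold(signals @ W.T + z @ S.T, theta)
    return z

def relevance_map(hr_patch, w1, w2):
    """Toy stand-in for the sequential convolutional layers of the attention
    module: two linear maps with a ReLU in between, then a sigmoid giving a
    per-voxel relevance value in (0, 1)."""
    h = np.maximum(hr_patch @ w1, 0.0)
    return 1.0 / (1.0 + np.exp(-(h @ w2)))  # shape (n_voxels, 1)

def attention_weighted_fusion(lr_signals, hr_patch, dictionary, w1, w2):
    """Fuse the LR sparse representation with HR guidance by voxelwise
    multiplication; the result would feed the resolution-enhancement part."""
    z = unrolled_sparse_code(lr_signals, dictionary)
    r = relevance_map(hr_patch, w1, w2)
    return z * r  # broadcasts (n_voxels, 1) over (n_voxels, n_atoms)
```

In the actual network all of these weights would be learned jointly end-to-end; the sketch only shows how the relevance map gates the sparse representation voxel by voxel before resolution enhancement.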

Original language: English
Article number: 102085
Journal: Medical Image Analysis
Volume: 71
DOI
Publication status: Published - Jul 2021
