Adversarial learning for mono- or multi-modal registration

Jingfan Fan, Xiaohuan Cao, Qian Wang, Pew-Thian Yap*, Dinggang Shen

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

125 Citations (Scopus)

Abstract

This paper introduces an unsupervised adversarial similarity network for image registration. Unlike existing deep learning registration methods, our approach can train a deformable registration network without the need for ground-truth deformations or specific similarity metrics. We connect a registration network and a discrimination network with a deformable transformation layer. The registration network is trained with feedback from the discrimination network, which is designed to judge whether a pair of registered images is sufficiently similar. Through adversarial training, the registration network learns to predict deformations that are accurate enough to fool the discrimination network. The proposed method is thus a general registration framework that can be applied to both mono-modal and multi-modal image registration. Experiments on four brain MRI datasets and a multi-modal pelvic image dataset show that our method yields promising registration performance in terms of accuracy, efficiency, and generalizability compared with state-of-the-art registration methods, including those based on deep learning.
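The deformable transformation layer that connects the two networks warps the moving image with the predicted displacement field so the discriminator can compare it against the fixed image. A minimal 2D sketch of such a layer, using bilinear interpolation in NumPy (an illustration of the general idea, not the authors' implementation, which operates on 3D volumes inside a deep learning framework):

```python
import numpy as np

def warp_bilinear(image, flow):
    """Warp a 2D image with a dense displacement field.

    flow has shape (2, H, W): flow[0] holds row displacements,
    flow[1] column displacements. Out-of-bounds samples are clamped
    to the image border.
    """
    H, W = image.shape
    rows, cols = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Sampling coordinates for each output pixel.
    r = np.clip(rows + flow[0], 0, H - 1)
    c = np.clip(cols + flow[1], 0, W - 1)
    # Integer corners and fractional weights for bilinear interpolation.
    r0, c0 = np.floor(r).astype(int), np.floor(c).astype(int)
    r1, c1 = np.clip(r0 + 1, 0, H - 1), np.clip(c0 + 1, 0, W - 1)
    wr, wc = r - r0, c - c0
    top = (1 - wc) * image[r0, c0] + wc * image[r0, c1]
    bot = (1 - wc) * image[r1, c0] + wc * image[r1, c1]
    return (1 - wr) * top + wr * bot

img = np.arange(16, dtype=float).reshape(4, 4)

# A zero displacement field is the identity transform.
assert np.allclose(warp_bilinear(img, np.zeros((2, 4, 4))), img)

# A uniform +1 shift along columns samples each pixel's right neighbour
# (clamped at the border).
shift = np.zeros((2, 4, 4))
shift[1] = 1.0
warped = warp_bilinear(img, shift)
```

In the full framework, the registration network produces `flow`, the warped output feeds the discrimination network, and the discriminator's judgment is backpropagated through this (differentiable) layer to train the registration network.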

Original language: English
Article number: 101545
Journal: Medical Image Analysis
Volume: 58
DOI
Publication status: Published - Dec 2019
Externally published: Yes
