Transferable attention networks for adversarial domain adaptation

Changchun Zhang, Qingjie Zhao*, Yu Wang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

31 Citations (Scopus)

Abstract

Domain adaptation is one of the fundamental challenges in transfer learning. Effectively transferring knowledge from a labeled source domain to an unlabeled target domain is critical for domain adaptation, as it helps reduce the considerable performance gap caused by domain shift. Existing domain adaptation methods address this issue by matching global features across domains. However, not all features are transferable, and forcefully matching untransferable features may lead to negative transfer. In this paper, we propose a novel method dubbed transferable attention networks (TAN) to address this issue. TAN performs feature alignment through adversarial optimization. Specifically, we use a self-attention mechanism to weight the extracted features so that the influence of untransferable features is effectively suppressed. Meanwhile, to exploit the complex multi-modal structures in domain adaptation, we condition the adversarial networks on both the learned features and the classifier predictions. Furthermore, we propose that accurately transferable features should minimize the domain discrepancy. Three loss functions are introduced into the adversarial networks: a classification loss, an attention transfer loss, and a condition transfer loss. Extensive experiments on the Office-31, ImageCLEF-DA, Office-Home, and VisDA-2017 datasets demonstrate that the proposed approach yields state-of-the-art results.
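The abstract names two ingredients that can be sketched compactly: an attention step that down-weights untransferable features, and an overall objective combining the three losses. The following is a minimal NumPy illustration under stated assumptions; the similarity-based attention scoring, the function names, and the trade-off weights `lam_att` and `lam_cond` are hypothetical stand-ins, not the paper's exact formulation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_reweight(features):
    """Down-weight presumably untransferable samples.

    `features` has shape (batch, dim). Each sample is scored by its
    similarity to the batch-mean feature; a softmax turns the scores into
    attention weights that suppress outlying (less transferable) samples.
    This scoring rule is an illustrative assumption, not TAN's module.
    """
    scores = features @ features.mean(axis=0)   # (batch,) similarity scores
    weights = softmax(scores)                   # weights sum to 1
    return features * weights[:, None], weights

def tan_objective(cls_loss, att_transfer_loss, cond_transfer_loss,
                  lam_att=1.0, lam_cond=1.0):
    """Weighted sum of the three losses named in the abstract.

    lam_att and lam_cond are hypothetical trade-off hyperparameters.
    """
    return cls_loss + lam_att * att_transfer_loss + lam_cond * cond_transfer_loss
```

In an adversarial setup, the feature extractor would be trained to minimize this combined objective while a domain discriminator is trained against it; the sketch only shows how the three terms compose.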

Original language: English
Pages (from–to): 422-433
Number of pages: 12
Journal: Information Sciences
Volume: 539
DOI
Publication status: Published - Oct 2020
