Generalized Domain Conditioned Adaptation Network

Shuang Li, Binhui Xie, Qiuxia Lin, Chi Harold Liu*, Gao Huang, Guoren Wang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

45 Citations (Scopus)

Abstract

Domain adaptation (DA) attempts to transfer knowledge learned in the labeled source domain to the unlabeled but related target domain without requiring large amounts of target supervision. Recent advances in DA mainly proceed by aligning the source and target distributions. Despite this significant success, adaptation performance still degrades when the source and target domains exhibit a large distribution discrepancy. We consider that this limitation may be attributed to insufficient exploration of domain-specialized features, because most studies concentrate only on domain-general feature learning in task-specific layers and adopt fully-shared convolutional networks (convnets) to generate common features for both domains. In this paper, we relax the fully-shared convnets assumption adopted by previous DA methods and propose the Domain Conditioned Adaptation Network (DCAN), which introduces a domain conditioned channel attention module with a multi-path structure to separately excite channel activations for each domain. Such a partially-shared convnets module allows low-level domain-specialized features to be explored appropriately. Further, given that knowledge transferability varies across convolutional layers, we develop the Generalized Domain Conditioned Adaptation Network (GDCAN) to automatically determine whether domain channel activations should be modeled separately in each attention module. In this way, critical domain-specialized knowledge can be adaptively extracted according to the domain statistic gaps. To the best of our knowledge, this is the first work to explore domain-wise convolutional channel activations separately for deep DA networks. Additionally, to effectively match high-level feature distributions across domains, we deploy feature adaptation blocks after the task-specific layers, which explicitly mitigate the domain discrepancy. Extensive experiments on four cross-domain benchmarks, including DomainNet, Office-Home, Office-31, and ImageCLEF, demonstrate that the proposed approaches outperform existing methods by a large margin, especially on the large-scale, challenging dataset. The code and models are available at https://github.com/BIT-DA/GDCAN.
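
The core architectural idea described in the abstract, a shared convolutional trunk with domain-specific channel excitation, can be illustrated with a short sketch. The snippet below is a minimal, hypothetical PyTorch illustration assuming an SE-style squeeze-and-excitation layout with one excitation branch per domain; the class and branch names are illustrative only, and the authors' actual design (including GDCAN's mechanism for deciding whether the two domains should share an excitation path) is available in the released code at https://github.com/BIT-DA/GDCAN.

```python
import torch
import torch.nn as nn


class DomainConditionedChannelAttention(nn.Module):
    """Hypothetical sketch of a domain conditioned channel attention module.

    A multi-path structure: the convolutional trunk stays shared, while each
    domain (source / target) has its own excitation branch so channel
    activations are modeled separately per domain.
    """

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: per-channel global statistics

        def excitation() -> nn.Sequential:
            return nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        # One excitation path per domain (the multi-path structure).
        self.source_branch = excitation()
        self.target_branch = excitation()

    def forward(self, x: torch.Tensor, from_source: bool) -> torch.Tensor:
        b, c, _, _ = x.shape
        squeezed = self.pool(x).view(b, c)
        # Route the squeezed descriptor through the domain-specific branch.
        branch = self.source_branch if from_source else self.target_branch
        weights = branch(squeezed).view(b, c, 1, 1)
        return x * weights  # re-weight channels conditioned on the domain
```

In use, such a module would be attached after convolutional blocks of a backbone, with `from_source=True` for source mini-batches and `from_source=False` for target mini-batches, so low-level domain-specialized features can be excited separately while the trunk weights remain shared.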

Original language: English
Pages (from-to): 4093-4109
Number of pages: 17
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 44
Issue number: 8
DOI
Publication status: Published - 1 Aug 2022
