TY - JOUR
T1 - Attention guided for partial domain adaptation
AU - Zhang, Changchun
AU - Zhao, Qingjie
N1 - Publisher Copyright:
© 2020 Elsevier Inc.
PY - 2021/2/8
Y1 - 2021/2/8
N2 - How to effectively extract feature representations from unlabeled samples in the target domain is critical for unsupervised domain adaptation, and especially for partial domain adaptation, where the source label space is a superset of the target label space, as it helps reduce the large performance gap caused by domain shift or domain bias. In this paper, a novel partial domain adaptation method named Multiple Self-Attention Networks (MSAN), based on adversarial learning, is proposed. Unlike most existing partial domain adaptation methods, which focus only on high-level features, MSAN exploits both high-level context features and low-level structural features from unlabeled target data with the help of labeled source data. Specifically, we present the multiple self-attention network, a general approach that learns more fine-grained and transferable features through gradual feature enhancement, so that domain shift is reduced and the model's generalization power is boosted. Comprehensive experiments on the Office-31 and Office-Home datasets demonstrate that the proposed approach significantly improves upon representative partial domain adaptation methods, yielding state-of-the-art results on various partial transfer tasks.
AB - How to effectively extract feature representations from unlabeled samples in the target domain is critical for unsupervised domain adaptation, and especially for partial domain adaptation, where the source label space is a superset of the target label space, as it helps reduce the large performance gap caused by domain shift or domain bias. In this paper, a novel partial domain adaptation method named Multiple Self-Attention Networks (MSAN), based on adversarial learning, is proposed. Unlike most existing partial domain adaptation methods, which focus only on high-level features, MSAN exploits both high-level context features and low-level structural features from unlabeled target data with the help of labeled source data. Specifically, we present the multiple self-attention network, a general approach that learns more fine-grained and transferable features through gradual feature enhancement, so that domain shift is reduced and the model's generalization power is boosted. Comprehensive experiments on the Office-31 and Office-Home datasets demonstrate that the proposed approach significantly improves upon representative partial domain adaptation methods, yielding state-of-the-art results on various partial transfer tasks.
KW - Adversarial Networks
KW - Attention Mechanism
KW - Partial Domain Adaptation
UR - http://www.scopus.com/inward/record.url?scp=85090347770&partnerID=8YFLogxK
U2 - 10.1016/j.ins.2020.08.103
DO - 10.1016/j.ins.2020.08.103
M3 - Article
AN - SCOPUS:85090347770
SN - 0020-0255
VL - 547
SP - 860
EP - 869
JO - Information Sciences
JF - Information Sciences
ER -