TY - JOUR
T1 - Meta-Reweighted Regularization for Unsupervised Domain Adaptation
AU - Li, Shuang
AU - Ma, Wenxuan
AU - Zhang, Jinming
AU - Liu, Chi Harold
AU - Liang, Jian
AU - Wang, Guoren
PY - 2023/3/1
Y1 - 2023/3/1
N2 - Unsupervised domain adaptation (UDA) enables knowledge transfer from a labeled source domain to an unlabeled target domain by reducing the cross-domain distribution discrepancy, and the adversarial-learning-based paradigm has achieved remarkable success. On top of the derived domain-invariant feature representations, a promising stream of recent works seeks to further regularize the classification decision boundary via self-training, learning a target-adaptive classifier with pseudo-labeled target samples. However, since the pseudo labels are inevitably noisy, most prior methods focus on manually designing elaborate target-selection algorithms or optimization objectives to combat the negative effect of incorrect pseudo labels. In contrast, this paper proposes a simple and powerful meta-learning-based target-reweighting regularization algorithm, called MetaReg, which regularizes model training by learning to reweight the noisy pseudo-labeled target samples. Specifically, MetaReg is motivated by the intuition that an ideal target classifier trained on correct target pseudo labels should make small classification errors on target-like source samples. Therefore, we explicitly define a meta-reweighting problem that seeks the optimal weights for different target pseudo labels by minimizing the classification loss on a designed validation set: a class-balanced set of source samples that are most similar to target ones. This optimization problem can be solved efficiently with a simplified approximation technique. The automatically learned optimal weights are then used to reweight the pseudo-labeled target samples, regularizing model learning through target supervision with the learned per-sample importance. Comprehensive experiments on several cross-domain image and text datasets verify that MetaReg outperforms its non-regularized UDA counterparts and achieves state-of-the-art performance. Code is available at https://github.com/BIT-DA/MetaReg.
AB - Unsupervised domain adaptation (UDA) enables knowledge transfer from a labeled source domain to an unlabeled target domain by reducing the cross-domain distribution discrepancy, and the adversarial-learning-based paradigm has achieved remarkable success. On top of the derived domain-invariant feature representations, a promising stream of recent works seeks to further regularize the classification decision boundary via self-training, learning a target-adaptive classifier with pseudo-labeled target samples. However, since the pseudo labels are inevitably noisy, most prior methods focus on manually designing elaborate target-selection algorithms or optimization objectives to combat the negative effect of incorrect pseudo labels. In contrast, this paper proposes a simple and powerful meta-learning-based target-reweighting regularization algorithm, called MetaReg, which regularizes model training by learning to reweight the noisy pseudo-labeled target samples. Specifically, MetaReg is motivated by the intuition that an ideal target classifier trained on correct target pseudo labels should make small classification errors on target-like source samples. Therefore, we explicitly define a meta-reweighting problem that seeks the optimal weights for different target pseudo labels by minimizing the classification loss on a designed validation set: a class-balanced set of source samples that are most similar to target ones. This optimization problem can be solved efficiently with a simplified approximation technique. The automatically learned optimal weights are then used to reweight the pseudo-labeled target samples, regularizing model learning through target supervision with the learned per-sample importance. Comprehensive experiments on several cross-domain image and text datasets verify that MetaReg outperforms its non-regularized UDA counterparts and achieves state-of-the-art performance. Code is available at https://github.com/BIT-DA/MetaReg.
KW - Domain adaptation
KW - adversarial learning
KW - meta-learning
KW - sample reweighting
KW - self-training
UR - http://www.scopus.com/inward/record.url?scp=85115717943&partnerID=8YFLogxK
U2 - 10.1109/TKDE.2021.3114536
DO - 10.1109/TKDE.2021.3114536
M3 - Article
AN - SCOPUS:85115717943
SN - 1041-4347
VL - 35
SP - 2781
EP - 2795
JO - IEEE Transactions on Knowledge and Data Engineering
JF - IEEE Transactions on Knowledge and Data Engineering
IS - 3
ER -