TY - JOUR
T1 - Source-free domain adaptation via dynamic pseudo labeling and self-supervision
AU - Ma, Qiankun
AU - Zeng, Jie
AU - Zhang, Jianjia
AU - Zu, Chen
AU - Wu, Xi
AU - Zhou, Jiliu
AU - Chen, Jie
AU - Wang, Yan
N1 - Publisher Copyright:
© 2024 Elsevier Ltd
PY - 2024/12
Y1 - 2024/12
AB - Recently, unsupervised domain adaptation (UDA) has attracted extensive interest for relieving the heavy reliance of vanilla deep learning on labeled data. It seeks a solution to adapt the knowledge from a well-labeled training dataset (source domain) to another unlabeled dataset (target domain). However, in some practical scenarios the source domain data is inaccessible for a variety of reasons and only a model trained on it can be provided, giving rise to a more challenging task, i.e., source-free unsupervised domain adaptation (SFUDA). Several pseudo labeling-based methods have been proposed to solve it by predicting pseudo labels for the unlabeled target domain data. Nevertheless, incorrectly assigned pseudo labels impose an adverse impact on network adaptation. To alleviate this issue, we propose a dynamic confidence-based pseudo labeling strategy for SFUDA. Unlike methods that first rigidly assign pseudo labels to all target domain data and then try to weaken the effect of incorrect pseudo labels during training, we proactively label the target samples with higher confidence in a dynamic manner. To further relieve the impact of incorrect pseudo labels, we harness collaborative learning to constrain the consistency of the network and impose an additional soft supervision. Besides, we investigate a possible problem brought by our labeling strategy, i.e., the neglect of wavering samples near the decision boundary, and solve it by injecting self-supervised learning into our model. Experiments on three UDA benchmark datasets demonstrate the state-of-the-art performance of the proposed method. The code is publicly available at https://github.com/meowpass/DPLS.
KW - Image classification
KW - Self-supervised learning
KW - Source-free domain adaptation
UR - http://www.scopus.com/inward/record.url?scp=85198941696&partnerID=8YFLogxK
U2 - 10.1016/j.patcog.2024.110793
DO - 10.1016/j.patcog.2024.110793
M3 - Article
AN - SCOPUS:85198941696
SN - 0031-3203
VL - 156
JO - Pattern Recognition
JF - Pattern Recognition
M1 - 110793
ER -