TY - GEN
T1 - Joint Domain Alignment and Adversarial Learning for Domain Generalization
AU - Li, Shanshan
AU - Zhao, Qingjie
AU - Wang, Lei
AU - Liu, Wangwang
AU - Zhang, Changchun
AU - Zou, Yuanbing
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024.
PY - 2024
Y1 - 2024
N2 - Domain generalization aims to learn a classifier from multiple observed source domains that can then be applied to unseen target domains. The primary challenge in domain generalization lies in extracting a domain-invariant representation. To tackle this challenge, we propose a multi-source domain generalization network called Joint Domain Alignment and Adversarial Learning (JDAAL), which learns a universal domain-invariant representation by aligning the feature distributions of the observed source domains based on the multi-kernel maximum mean discrepancy. We adopt an optimal multi-kernel selection strategy that further enhances the effectiveness of embedding matching and approximates different distributions in the domain-invariant feature space. Additionally, we use an adversarial auto-encoder to bound the multi-kernel maximum mean discrepancy, rendering the feature distributions of all observed source domains less distinguishable. In this way, the domain-invariant representation learned by JDAAL improves adaptability to unseen target domains. Extensive experiments on benchmark cross-domain datasets demonstrate the superiority of the proposed method.
AB - Domain generalization aims to learn a classifier from multiple observed source domains that can then be applied to unseen target domains. The primary challenge in domain generalization lies in extracting a domain-invariant representation. To tackle this challenge, we propose a multi-source domain generalization network called Joint Domain Alignment and Adversarial Learning (JDAAL), which learns a universal domain-invariant representation by aligning the feature distributions of the observed source domains based on the multi-kernel maximum mean discrepancy. We adopt an optimal multi-kernel selection strategy that further enhances the effectiveness of embedding matching and approximates different distributions in the domain-invariant feature space. Additionally, we use an adversarial auto-encoder to bound the multi-kernel maximum mean discrepancy, rendering the feature distributions of all observed source domains less distinguishable. In this way, the domain-invariant representation learned by JDAAL improves adaptability to unseen target domains. Extensive experiments on benchmark cross-domain datasets demonstrate the superiority of the proposed method.
KW - Adversarial learning
KW - Domain alignment
KW - Domain generalization
UR - http://www.scopus.com/inward/record.url?scp=85187640846&partnerID=8YFLogxK
U2 - 10.1007/978-981-97-0885-7_12
DO - 10.1007/978-981-97-0885-7_12
M3 - Conference contribution
AN - SCOPUS:85187640846
SN - 9789819708840
T3 - Communications in Computer and Information Science
SP - 132
EP - 146
BT - Cognitive Computation and Systems - 2nd International Conference, ICCCS 2023, Revised Selected Papers
A2 - Sun, Fuchun
A2 - Li, Jianmin
PB - Springer Science and Business Media Deutschland GmbH
T2 - 2nd International Conference on Cognitive Computation and Systems, ICCCS 2023
Y2 - 14 October 2023 through 15 October 2023
ER -