TY - GEN
T1 - Interaction and Alignment for Visible-Infrared Person Re-Identification
AU - Gong, Jiahao
AU - Zhao, Sanyuan
AU - Lam, Kin-Man
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Visible-Infrared Person Re-Identification (VI-ReID) is a challenging person matching problem and a practical solution for intelligent surveillance systems at night. The heterogeneity between the visible and infrared modalities severely degrades retrieval performance. Many works have been proposed to address the information discrepancy between the visible and infrared modalities, but the relationships between cross-modality samples have rarely been exploited. In this paper, we propose a Cross-modality Interaction and Alignment (CIA) module to address the discrepancy problem. By transferring information between the two modalities, the module guides the network to capture modality-shared features, which helps reduce the cross-modality discrepancy. Meanwhile, to better supervise the network, an enhanced contrastive loss is introduced. By further optimizing the distances between intra-class samples, it provides the network with more effective supervision. Extensive experiments on two benchmark datasets show that our method achieves excellent performance in VI-ReID.
AB - Visible-Infrared Person Re-Identification (VI-ReID) is a challenging person matching problem and a practical solution for intelligent surveillance systems at night. The heterogeneity between the visible and infrared modalities severely degrades retrieval performance. Many works have been proposed to address the information discrepancy between the visible and infrared modalities, but the relationships between cross-modality samples have rarely been exploited. In this paper, we propose a Cross-modality Interaction and Alignment (CIA) module to address the discrepancy problem. By transferring information between the two modalities, the module guides the network to capture modality-shared features, which helps reduce the cross-modality discrepancy. Meanwhile, to better supervise the network, an enhanced contrastive loss is introduced. By further optimizing the distances between intra-class samples, it provides the network with more effective supervision. Extensive experiments on two benchmark datasets show that our method achieves excellent performance in VI-ReID.
UR - http://www.scopus.com/inward/record.url?scp=85143612081&partnerID=8YFLogxK
U2 - 10.1109/ICPR56361.2022.9956505
DO - 10.1109/ICPR56361.2022.9956505
M3 - Conference contribution
AN - SCOPUS:85143612081
T3 - Proceedings - International Conference on Pattern Recognition
SP - 2253
EP - 2259
BT - 2022 26th International Conference on Pattern Recognition, ICPR 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 26th International Conference on Pattern Recognition, ICPR 2022
Y2 - 21 August 2022 through 25 August 2022
ER -