TY - JOUR
T1 - DI2L
T2 - Cross-modality person re-identification with discriminative feature and information-balanced identity learning
AU - Li, Xin Heng
AU - Liu, Zhen Tao
AU - She, Jinhua
AU - Hirota, Kaoru
N1 - Publisher Copyright:
© 2025 Elsevier B.V.
PY - 2026/2/28
Y1 - 2026/2/28
N2 - Two key challenges remain unaddressed in Visible-Infrared Person Re-Identification (VI-ReID). The first is the information imbalance between the two modalities: the infrared modality provides significantly less information than the visible modality, which leads to overfitting on the visible modality and degrades the model's performance. The second is the lack of discriminative features: conventional methods concentrate mainly on mitigating the modality gap while ignoring identity-informative features. To address these challenges, we propose DI2L, a novel VI-ReID method composed of a channel augmentation (RCA) module, a weighted part aggregation (DPA) module, and an information-balanced identity learning (I2L) module. Specifically, for model robustness, the RCA module generates auxiliary images, preventing the model from overdepending on the visible modality. The DPA module obtains discriminative features by exploring the relationships between different parts of the features. The I2L module addresses the challenge of information imbalance while searching for discriminative features. Comprehensive experiments demonstrate the superiority of DI2L over state-of-the-art (SOTA) methods on the SYSU-MM01, RegDB, and LLCM datasets.
AB - Two key challenges remain unaddressed in Visible-Infrared Person Re-Identification (VI-ReID). The first is the information imbalance between the two modalities: the infrared modality provides significantly less information than the visible modality, which leads to overfitting on the visible modality and degrades the model's performance. The second is the lack of discriminative features: conventional methods concentrate mainly on mitigating the modality gap while ignoring identity-informative features. To address these challenges, we propose DI2L, a novel VI-ReID method composed of a channel augmentation (RCA) module, a weighted part aggregation (DPA) module, and an information-balanced identity learning (I2L) module. Specifically, for model robustness, the RCA module generates auxiliary images, preventing the model from overdepending on the visible modality. The DPA module obtains discriminative features by exploring the relationships between different parts of the features. The I2L module addresses the challenge of information imbalance while searching for discriminative features. Comprehensive experiments demonstrate the superiority of DI2L over state-of-the-art (SOTA) methods on the SYSU-MM01, RegDB, and LLCM datasets.
KW - Cross-modality person re-identification
KW - Information imbalance
KW - Metric learning
KW - Representation learning
UR - https://www.scopus.com/pages/publications/105024532906
U2 - 10.1016/j.neucom.2025.132256
DO - 10.1016/j.neucom.2025.132256
M3 - Article
AN - SCOPUS:105024532906
SN - 0925-2312
VL - 667
JO - Neurocomputing
JF - Neurocomputing
M1 - 132256
ER -