
DI2L: Cross-modality person re-identification with discriminative feature and information-balanced identity learning

  • Xin Heng Li
  • , Zhen Tao Liu*
  • , Jinhua She
  • , Kaoru Hirota
  • *Corresponding author for this work
  • China University of Geosciences, Wuhan
  • Ministry of Education in China
  • Tokyo University of Technology
  • Institute of Science Tokyo

Research output: Contribution to journal › Article › peer-review

Abstract

Two key challenges remain unaddressed in Visible-Infrared Person Re-Identification (VI-ReID). The first is the information imbalance between the two modalities: the infrared modality provides significantly less information than the visible modality, which leads to overfitting on the visible modality and adversely affects the model's performance. The second is the lack of discriminative features, since conventional methods mainly concentrate on mitigating the modality gap while ignoring identity-informative features. To address these challenges, we propose DI2L, a novel VI-ReID method composed of a channel augmentation (RCA) module, a weighted part aggregation (DPA) module, and an information-balanced identity learning (I2L) module. Specifically, for model robustness, the RCA module generates auxiliary images, preventing the model from over-relying on the visible modality. The DPA module obtains discriminative features by exploring the relationships between different parts of the features. The I2L module addresses the challenge of information imbalance while searching for discriminative features. Comprehensive experiments demonstrate the superiority of DI2L over SOTA methods on the SYSU-MM01, RegDB, and LLCM datasets.
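The abstract does not specify how the RCA module produces auxiliary images. A common channel-level augmentation strategy in VI-ReID replaces all three RGB channels of a visible image with one randomly chosen channel, yielding a grayscale-like image closer in appearance to the infrared modality. The sketch below is a hypothetical illustration of that general idea, not the authors' implementation; the function name `random_channel_augment` is an assumption.

```python
import numpy as np

def random_channel_augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Replace all three channels of an (H, W, 3) image with one randomly
    selected channel, producing a single-channel-style auxiliary image."""
    c = rng.integers(0, 3)  # pick one of R, G, B at random
    # Broadcast the chosen channel back to three channels so downstream
    # networks that expect 3-channel input still work unchanged.
    return np.repeat(img[:, :, c:c + 1], 3, axis=2)

# Usage: apply to a toy visible image
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(4, 4, 3), dtype=np.uint8)
aug = random_channel_augment(img, rng)
```

In training pipelines this kind of augmentation is typically applied to each visible image with some probability, so the model sees both the original RGB and its channel-collapsed variant under the same identity label.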

Original language: English
Article number: 132256
Journal: Neurocomputing
Volume: 667
DOI
Publication status: Published - 28 Feb 2026
Externally published: Yes
