Dual-Semantic Consistency Learning for Visible-Infrared Person Re-Identification

Yiyuan Zhang, Yuhao Kang, Sanyuan Zhao*, Jianbing Shen

*Corresponding author of this work

Research output: Contribution to journal › Article › peer-review

20 Citations (Scopus)

Abstract

Visible-Infrared person Re-Identification (VI-ReID) performs comprehensive identity analysis across non-overlapping visible and infrared camera sets for intelligent surveillance systems, and faces large instance variations caused by the modality discrepancy. Existing methods employ various network structures to extract modality-invariant features. In contrast, we propose a novel framework, named the Dual-Semantic Consistency Learning Network (DSCNet), which attributes the modality discrepancy to channel-level semantic inconsistency. DSCNet optimizes channel consistency from two aspects: fine-grained inter-channel semantics and comprehensive inter-modality semantics. Furthermore, we propose Joint Semantics Metric Learning to simultaneously optimize the distribution of the channel- and modality-level feature embeddings, jointly exploiting the correlation between channel-specific and modality-specific semantics in a fine-grained manner. Experiments on the SYSU-MM01 and RegDB datasets validate that DSCNet outperforms current state-of-the-art methods. On the more challenging SYSU-MM01 dataset, our network achieves 73.89% Rank-1 accuracy and 69.47% mAP. Our code is available at https://github.com/bitreidgroup/DSCNet.
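The abstract frames the modality gap as channel-level semantic inconsistency between visible and infrared embeddings. A minimal sketch of that idea, assuming a simple per-channel statistic-matching penalty (the function name and formulation here are illustrative assumptions, not the paper's exact loss; see the linked repository for the actual implementation):

```python
import numpy as np

def channel_consistency_loss(vis_feats, ir_feats):
    """Hypothetical channel-level consistency penalty: align the
    per-channel mean of visible and infrared feature embeddings.
    Both inputs have shape (batch, channels)."""
    vis_mu = vis_feats.mean(axis=0)  # per-channel mean, shape (C,)
    ir_mu = ir_feats.mean(axis=0)
    # Mean squared gap across channels; zero when channel semantics agree.
    return float(np.mean((vis_mu - ir_mu) ** 2))

# Toy usage: identical cross-modality distributions give zero loss,
# a shifted infrared embedding gives a positive penalty.
rng = np.random.default_rng(0)
vis = rng.normal(size=(8, 4))
print(channel_consistency_loss(vis, vis.copy()))  # 0.0
print(channel_consistency_loss(vis, vis + 1.0) > 0)  # True
```

Such a term would be minimized jointly with an identity loss, which is the spirit of the joint channel-and-modality metric learning the abstract describes.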

Original language: English
Pages (from-to): 1554-1565
Number of pages: 12
Journal: IEEE Transactions on Information Forensics and Security
Volume: 18
DOI
Publication status: Published - 2023

