
CLIP-Driven Semantic Discovery Network for Visible-Infrared Person Re-Identification

  • Xiaoyan Yu
  • , Neng Dong*
  • , Liehuang Zhu
  • , Hao Peng
  • , Dapeng Tao
  • *Corresponding author for this work
  • Beijing Institute of Technology
  • Nanjing University of Science and Technology
  • Beihang University
  • Yunnan University

Research output: Contribution to journal › Article › peer-review

Abstract

Visible-infrared person re-identification (VIReID) primarily deals with matching identities across person images from different modalities. Due to the modality gap between visible and infrared images, cross-modality identity matching poses significant challenges. Recognizing that high-level semantics of pedestrian appearance, such as gender, shape, and clothing style, remain consistent across modalities, this paper aims to bridge the modality gap by infusing visual features with high-level semantics. Given the capability of Contrastive Language-Image Pre-training (CLIP) to sense high-level semantic information corresponding to visual representations, we explore the application of CLIP within the domain of VIReID. Consequently, we propose a CLIP-Driven Semantic Discovery Network (CSDN) that consists of a Modality-specific Prompt Learner, Semantic Information Integration (SII), and High-level Semantic Embedding (HSE). Specifically, considering the diversity stemming from modality discrepancies in language descriptions, we devise bimodal learnable text tokens to capture modality-private semantic information for visible and infrared images, respectively. Additionally, acknowledging the complementary nature of semantic details across different modalities, we integrate text features from the bimodal language descriptions to achieve comprehensive semantics. Finally, we establish a connection between the integrated text features and the visual features across modalities. This process embeds rich high-level semantic information into visual representations, thereby promoting the modality invariance of visual representations. The effectiveness and superiority of our proposed CSDN over existing methods have been substantiated through experimental evaluations on multiple widely used benchmarks.
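The pipeline the abstract describes — modality-specific learnable prompt tokens, fusion of the two resulting text features (SII), then pulling cross-modality visual features toward the fused semantics (HSE) — can be sketched as a toy NumPy example. Everything here is a hypothetical stand-in: the random projection `text_encoder`, the token counts, and the embedding dimension are illustrative assumptions, not the actual CSDN or CLIP components.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # toy embedding dimension (CLIP uses e.g. 512)

def l2norm(x):
    # Normalize along the last axis, as CLIP does before similarity.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Hypothetical stand-in for a frozen CLIP text encoder: mean-pool the
# prompt tokens, then apply a fixed random projection.
W_text = rng.normal(size=(D, D))
def text_encoder(prompt_tokens):
    return l2norm(prompt_tokens.mean(axis=0) @ W_text)

# Modality-specific Prompt Learner: one set of learnable text tokens
# per modality (here just random initial values).
prompts_vis = rng.normal(size=(4, D))   # visible-modality tokens
prompts_ir = rng.normal(size=(4, D))    # infrared-modality tokens

# Semantic Information Integration (SII): fuse the bimodal text
# features into one comprehensive semantic representation.
t_vis = text_encoder(prompts_vis)
t_ir = text_encoder(prompts_ir)
t_int = l2norm(t_vis + t_ir)

# High-level Semantic Embedding (HSE): encourage visual features of the
# same identity, from both modalities, to align with the integrated
# text feature (cosine distance as a toy alignment objective).
v_vis = l2norm(rng.normal(size=D))      # visible-image feature
v_ir = l2norm(rng.normal(size=D))       # infrared-image feature

def alignment_loss(v, t):
    return 1.0 - float(v @ t)           # cosine distance in [0, 2]

loss = alignment_loss(v_vis, t_int) + alignment_loss(v_ir, t_int)
```

Minimizing such a loss over the prompt tokens and the visual encoder would push both modalities' features toward a shared, semantics-rich anchor, which is the intuition behind embedding high-level semantics to promote modality invariance.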

Original language: English
Pages (from-to): 4137-4150
Number of pages: 14
Journal: IEEE Transactions on Multimedia
Volume: 27
DOI
Publication status: Published - 2025
Externally published: Yes
