Enhancing Person Re-Identification Performance Through in Vivo Learning

Yan Huang*, Liang Wang*, Zhang Zhang, Qiang Wu, Yi Zhong

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

This research investigates the potential of in vivo learning to enhance visual representation learning for image-based person re-identification (re-ID). Unlike traditional self-supervised learning, which requires external data, the introduced in vivo learning utilizes supervisory labels generated from the pedestrian images themselves to improve re-ID accuracy without relying on external data sources. Three carefully designed in vivo learning tasks, leveraging statistical regularities within images, are proposed without the need for laborious manual annotation. These tasks enable feature extractors to learn more comprehensive and discriminative person representations by jointly modeling various aspects of human biological structure information, contributing to enhanced re-ID performance. Notably, the method integrates seamlessly with existing re-ID frameworks, requiring minimal modifications and no data beyond the existing training set. Extensive experiments on diverse datasets, including Market-1501, CUHK03-NP, Celeb-reID, Celeb-reID-light, PRCC, and LTCC, demonstrate substantial improvements in rank-1 accuracy compared to state-of-the-art methods.
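The abstract does not spell out the three in vivo tasks, but the general recipe it describes (deriving free supervisory labels from the pedestrian images themselves and optimizing them jointly with the identity loss, on top of an existing re-ID framework) can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration: the stripe-position prediction task is a hypothetical stand-in for the paper's actual tasks, and the tiny backbone, `aux_head`, and `aux_weight` are invented names, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReIDWithInVivoTask(nn.Module):
    """Supervised re-ID head plus one auxiliary self-supervised head.

    Hypothetical sketch: the auxiliary task predicts which horizontal
    body stripe a crop was taken from, so its labels come from the image
    itself -- no external data or manual annotation.
    """
    def __init__(self, num_ids, num_stripes=4, feat_dim=128):
        super().__init__()
        # Tiny stand-in backbone; a real system would use e.g. ResNet-50.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.id_head = nn.Linear(feat_dim, num_ids)       # supervised re-ID
        self.aux_head = nn.Linear(feat_dim, num_stripes)  # in vivo task
        self.num_stripes = num_stripes

    def forward(self, images):
        return self.id_head(self.backbone(images))

    def in_vivo_batch(self, images):
        """Build (stripe crop, stripe index) pairs from the batch itself."""
        b, _, h, w = images.shape
        stripe_h = h // self.num_stripes
        idx = torch.randint(0, self.num_stripes, (b,), device=images.device)
        crops = torch.stack([
            F.interpolate(  # resize each stripe back to full input size
                images[i:i + 1, :, k * stripe_h:(k + 1) * stripe_h, :],
                size=(h, w), mode="bilinear", align_corners=False,
            ).squeeze(0)
            for i, k in enumerate(idx.tolist())
        ])
        return crops, idx

    def loss(self, images, ids, aux_weight=0.5):
        # Joint objective: identity loss + weighted self-supervised loss.
        id_loss = F.cross_entropy(self.forward(images), ids)
        crops, stripe_idx = self.in_vivo_batch(images)
        aux_loss = F.cross_entropy(self.aux_head(self.backbone(crops)),
                                   stripe_idx)
        return id_loss + aux_weight * aux_loss

# Usage: Market-1501 has 751 training identities; 256x128 is a common
# re-ID input size. Random tensors stand in for a real data loader.
model = ReIDWithInVivoTask(num_ids=751)
imgs = torch.randn(8, 3, 256, 128)
ids = torch.randint(0, 751, (8,))
model.loss(imgs, ids).backward()
```

The key property the sketch mirrors is the one the abstract claims: the auxiliary branch bolts onto an existing re-ID model with minimal changes and consumes only the training images already at hand.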

Original language: English
Pages (from-to): 639-654
Number of pages: 16
Journal: IEEE Transactions on Image Processing
Volume: 33
DOI: https://doi.org/10.1109/TIP.2023.3341762
Publication status: Published - 2024

Keywords

  • Person re-identification
  • boosting performance
  • in vivo learning


Cite this

Huang, Y., Wang, L., Zhang, Z., Wu, Q., & Zhong, Y. (2024). Enhancing Person Re-Identification Performance Through in Vivo Learning. IEEE Transactions on Image Processing, 33, 639-654. https://doi.org/10.1109/TIP.2023.3341762