AA-RGTCN: reciprocal global temporal convolution network with adaptive alignment for video-based person re-identification

Yanjun Zhang, Yanru Lin, Xu Yang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Person re-identification (Re-ID) aims to retrieve the same pedestrian across different cameras. Compared with image-based Re-ID, video-based Re-ID extracts features from video sequences, which contain both spatial and temporal information. Existing methods usually focus on the most salient image parts, which leads to redundant spatial description and insufficient temporal description. Other methods that do take temporal cues into account usually ignore misalignment between frames and handle only a fixed length of a given sequence. In this study, we propose a Reciprocal Global Temporal Convolution Network with Adaptive Alignment (AA-RGTCN). The structure addresses the misalignment between frames and models discriminative temporal representations. Specifically, the Adaptive Alignment block shifts each frame adaptively to its best position for temporal modeling. We then propose the Reciprocal Global Temporal Convolution Network to model robust temporal features across different time intervals in both normal and inverted time order. Experimental results show that AA-RGTCN achieves 85.9% mAP and 91.0% Rank-1 on MARS, 90.6% Rank-1 on iLIDS-VID, and 96.6% Rank-1 on PRID-2011, outperforming other state-of-the-art approaches.
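The paper's exact architecture is not given in this abstract, but the core idea of a reciprocal temporal convolution — applying the same temporal filter to a sequence of frame-level features in both normal and inverted time order and fusing the two passes — can be sketched as follows. This is a minimal NumPy illustration; the function names, edge padding, and averaging fusion are assumptions for exposition, not the authors' implementation:

```python
import numpy as np

def temporal_conv(seq, kernel):
    """Apply a 1D temporal convolution over frame features.

    seq:    (T, D) array, one D-dim feature vector per frame
    kernel: (K,) temporal weights shared across feature dims
    """
    T, _ = seq.shape
    K = len(kernel)
    pad = K // 2
    # Edge-pad in time so the output keeps T frames (an assumption here)
    padded = np.pad(seq, ((pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(seq)
    for t in range(T):
        # Weighted sum over a temporal window centered at frame t
        out[t] = sum(kernel[k] * padded[t + k] for k in range(K))
    return out

def reciprocal_temporal_conv(seq, kernel):
    """Fuse a forward-time and an inverted-time convolution pass."""
    forward = temporal_conv(seq, kernel)
    # Reverse the sequence, convolve, then reverse back to original order
    backward = temporal_conv(seq[::-1], kernel)[::-1]
    # Simple averaging fusion (the paper's fusion scheme may differ)
    return (forward + backward) / 2.0
```

In this sketch the backward pass lets each frame aggregate context as if time ran in reverse; with an asymmetric kernel the two passes differ, so their fusion captures temporal cues in both directions.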

Original language: English
Article number: 1329884
Journal: Frontiers in Neuroscience
Volume: 18
DOIs
Publication status: Published - 2024

Keywords

  • convolutional neural network
  • frame alignment
  • image recognition
  • temporal modeling
  • video person re-identification

