TY - GEN
T1 - AONet
T2 - 16th Asian Conference on Computer Vision, ACCV 2022
AU - Gao, Guangyu
AU - Wang, Qianxiang
AU - Ge, Jing
AU - Zhang, Yan
N1 - Publisher Copyright:
© 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2023
Y1 - 2023
N2 - Occluded person Re-identification (Occluded ReID) aims to verify the identity of a pedestrian under occlusion across non-overlapping cameras. Previous works for this task often rely on external tasks, e.g., pose estimation or semantic segmentation, to extract local features over fixed given regions. However, these external models may perform poorly on Occluded ReID, since they remain open problems with no reliable performance guarantee and are not oriented towards ReID tasks, so they cannot provide discriminative local features. In this paper, we propose an Attentional Occlusion-aware Network (AONet) for Occluded ReID that does not rely on any external tasks. AONet adaptively learns discriminative local features over latent landmark regions via trainable pattern vectors, and softly weights the summation of landmark-wise similarities based on occlusion awareness. Also, as there are no ground-truth occlusion annotations, we measure the occlusion of landmarks by awareness scores, referring to a memorized dictionary that stores average landmark features. These awareness scores are then used as soft weights for training and inference. Meanwhile, the memorized dictionary is momentum-updated according to the landmark features and the awareness scores of each input image. AONet achieves 53.1% mAP and 66.5% Rank-1 on Occluded-DukeMTMC, significantly outperforming the state of the art without any bells and whistles, and also shows clear improvements on the holistic datasets Market-1501 and DukeMTMC-reID, as well as the partial datasets Partial-REID and Partial-iLIDS. The code and pre-trained models will be released online soon.
KW - Landmark
KW - Occluded ReID
KW - Occlusion-aware
KW - Orthogonal
UR - http://www.scopus.com/inward/record.url?scp=85151057220&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-26348-4_2
DO - 10.1007/978-3-031-26348-4_2
M3 - Conference contribution
AN - SCOPUS:85151057220
SN - 9783031263477
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 21
EP - 36
BT - Computer Vision – ACCV 2022 - 16th Asian Conference on Computer Vision, Proceedings
A2 - Wang, Lei
A2 - Gall, Juergen
A2 - Chin, Tat-Jun
A2 - Sato, Imari
A2 - Chellappa, Rama
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 4 December 2022 through 8 December 2022
ER -