TY - JOUR
T1 - SAFA
T2 - Lifelong Person Re-Identification learning by statistics-aware feature alignment
AU - Gao, Qiankun
AU - Jia, Mengxi
AU - Chen, Jie
AU - Zhang, Jian
N1 - Publisher Copyright:
© 2024
PY - 2025/3
Y1 - 2025/3
N2 - The goal of Lifelong Person Re-Identification (Re-ID) is to continuously update a model with new data to improve its generalization ability, without forgetting previously learned knowledge. Lifelong Re-ID approaches usually employ classifier-based knowledge distillation to overcome forgetting, where classifier parameters grow with the amount of training data. In the fine-grained Re-ID task, features contain more valuable information than classifiers. However, due to feature space drift, naive feature distillation can overly suppress the model's plasticity. This paper proposes SAFA with statistics-aware feature alignment and progressive feature distillation. Specifically, we align new and old features based on the coefficient of variation and gradually increase the strength of feature distillation. This encourages the model to learn new knowledge in early epochs, penalizes it for forgetting in later epochs, and ultimately achieves a better stability–plasticity balance. Experiments on domain-incremental and intra-domain benchmarks demonstrate that our SAFA significantly outperforms counterparts while achieving better memory and computation efficiency.
AB - The goal of Lifelong Person Re-Identification (Re-ID) is to continuously update a model with new data to improve its generalization ability, without forgetting previously learned knowledge. Lifelong Re-ID approaches usually employ classifier-based knowledge distillation to overcome forgetting, where classifier parameters grow with the amount of training data. In the fine-grained Re-ID task, features contain more valuable information than classifiers. However, due to feature space drift, naive feature distillation can overly suppress the model's plasticity. This paper proposes SAFA with statistics-aware feature alignment and progressive feature distillation. Specifically, we align new and old features based on the coefficient of variation and gradually increase the strength of feature distillation. This encourages the model to learn new knowledge in early epochs, penalizes it for forgetting in later epochs, and ultimately achieves a better stability–plasticity balance. Experiments on domain-incremental and intra-domain benchmarks demonstrate that our SAFA significantly outperforms counterparts while achieving better memory and computation efficiency.
KW - Feature space drift
KW - Lifelong learning
KW - Person re-identification
UR - http://www.scopus.com/inward/record.url?scp=85214345246&partnerID=8YFLogxK
U2 - 10.1016/j.jvcir.2024.104378
DO - 10.1016/j.jvcir.2024.104378
M3 - Article
AN - SCOPUS:85214345246
SN - 1047-3203
VL - 107
JO - Journal of Visual Communication and Image Representation
JF - Journal of Visual Communication and Image Representation
M1 - 104378
ER -