TY - GEN
T1 - Learning Visual Prompt for Gait Recognition
AU - Ma, Kang
AU - Fu, Ying
AU - Cao, Chunshui
AU - Hou, Saihui
AU - Huang, Yongzhen
AU - Zheng, Dezhi
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Gait, a prevalent and complex form of human motion, plays a significant role in long-range pedestrian retrieval due to the unique characteristics inherent in individual motion patterns. However, gait recognition in real-world scenarios is challenging due to the limitations of capturing comprehensive cross-view and cross-clothing data. Additionally, distractors such as occlusions, directional changes, and lingering movements further complicate the problem. The widespread application of deep learning techniques has led to the development of various potential gait recognition methods. However, these methods utilize convolutional networks to extract shared information across different views and attire conditions. Once trained, the parameters and non-linear functions become constrained to fixed patterns, limiting their adaptability to various distractors in real-world scenarios. In this paper, we present a unified gait recognition framework to extract global motion patterns and develop a novel dynamic transformer to generate representative gait features. Specifically, we develop a trainable part-based prompt pool with numerous key-value pairs that can dynamically select prompt templates to incorporate into the gait sequence, thereby providing task-relevant shared knowledge. Furthermore, we specifically design dynamic attention to extract robust motion patterns and address the length generalization issue. Extensive experiments on four widely recognized gait datasets, i.e., Gait3D, GREW, OUMVLP, and CASIA-B, reveal that the proposed method yields substantial improvements compared to current state-of-the-art approaches.
AB - Gait, a prevalent and complex form of human motion, plays a significant role in long-range pedestrian retrieval due to the unique characteristics inherent in individual motion patterns. However, gait recognition in real-world scenarios is challenging due to the limitations of capturing comprehensive cross-view and cross-clothing data. Additionally, distractors such as occlusions, directional changes, and lingering movements further complicate the problem. The widespread application of deep learning techniques has led to the development of various potential gait recognition methods. However, these methods utilize convolutional networks to extract shared information across different views and attire conditions. Once trained, the parameters and non-linear functions become constrained to fixed patterns, limiting their adaptability to various distractors in real-world scenarios. In this paper, we present a unified gait recognition framework to extract global motion patterns and develop a novel dynamic transformer to generate representative gait features. Specifically, we develop a trainable part-based prompt pool with numerous key-value pairs that can dynamically select prompt templates to incorporate into the gait sequence, thereby providing task-relevant shared knowledge. Furthermore, we specifically design dynamic attention to extract robust motion patterns and address the length generalization issue. Extensive experiments on four widely recognized gait datasets, i.e., Gait3D, GREW, OUMVLP, and CASIA-B, reveal that the proposed method yields substantial improvements compared to current state-of-the-art approaches.
UR - http://www.scopus.com/inward/record.url?scp=85207318745&partnerID=8YFLogxK
U2 - 10.1109/CVPR52733.2024.00063
DO - 10.1109/CVPR52733.2024.00063
M3 - Conference contribution
AN - SCOPUS:85207318745
T3 - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
SP - 593
EP - 603
BT - Proceedings - 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024
PB - IEEE Computer Society
T2 - 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024
Y2 - 16 June 2024 through 22 June 2024
ER -