Self-trained multi-cues model for video anomaly detection

Xusheng Wang, Zhengang Nie, Wei Liang, Mingtao Pei*

*Corresponding author of this work

Research output: Contribution to journal › Article › peer-review

Abstract

Video anomaly detection is an extremely challenging task in the field of intelligent surveillance analysis. In this paper, we propose a video anomaly detection method that requires no manual annotation, which is the key limitation of existing weakly-supervised methods. Compared to existing single-cue unsupervised methods, we explore the importance of multiple cues and design a self-trained multi-cue model for video anomaly detection. In addition to appearance features, we find that motion features and reconstruction error features are essential for detecting abnormal behaviors. Our method extracts and fuses these features from video within a self-trained framework. Specifically, we use auto-encoders to generate reconstruction error maps of frames and optical flow maps respectively. We then extract multi-cue features from the frames/flow maps and the reconstruction error maps to detect abnormal events. As our model is self-trained, we do not need manually labeled training data. We conduct validation experiments on two public datasets. The experimental results show that our self-trained multi-cue model outperforms existing unsupervised video anomaly detection methods and achieves competitive results compared with weakly-supervised methods.
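The reconstruction-error cue described above can be illustrated with a minimal sketch: an auto-encoder trained on normal data reconstructs normal frames well, so a large per-pixel error signals an anomaly. The sketch below assumes a simple absolute-difference error map and a weighted-sum fusion of the appearance, motion, and reconstruction cues; both are hypothetical simplifications for illustration, not the paper's actual architecture or learned fusion.

```python
import numpy as np

def reconstruction_error_map(frame, reconstruction):
    """Per-pixel absolute reconstruction error (an assumed error
    definition; the paper's exact formulation may differ)."""
    return np.abs(frame.astype(np.float32) - reconstruction.astype(np.float32))

def fuse_cues(appearance_score, motion_score, recon_error_map,
              weights=(1.0, 1.0, 1.0)):
    """Combine the three cues into one anomaly score via a weighted sum
    (a hypothetical fusion rule, not the paper's method)."""
    recon_score = float(recon_error_map.mean())
    w_a, w_m, w_r = weights
    return w_a * appearance_score + w_m * motion_score + w_r * recon_score

# Toy example: a perfectly reconstructed frame vs. a badly reconstructed one.
rng = np.random.default_rng(0)
frame = rng.random((8, 8))
good = frame.copy()   # perfect reconstruction -> zero error map
bad = frame + 0.5     # poor reconstruction -> uniformly large error map

s_good = fuse_cues(0.1, 0.1, reconstruction_error_map(frame, good))
s_bad = fuse_cues(0.1, 0.1, reconstruction_error_map(frame, bad))
assert s_bad > s_good  # higher reconstruction error -> higher anomaly score
```

In practice the appearance and motion scores would themselves come from features of the frames and optical flow maps; here they are fixed constants so the example isolates how the reconstruction-error cue shifts the fused score.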

Original language: English
Pages (from-to): 62333-62347
Number of pages: 15
Journal: Multimedia Tools and Applications
Volume: 83
Issue number: 22
DOI
Publication status: Published - Jul 2024
