Self-trained multi-cues model for video anomaly detection

Xusheng Wang, Zhengang Nie, Wei Liang, Mingtao Pei*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Video anomaly detection is an extremely challenging task in the field of intelligent surveillance analysis. In this paper, we propose a video anomaly detection method that requires no manual annotation, which addresses a key limitation of existing weakly-supervised methods. Compared to existing single-cue unsupervised methods, we explore the importance of multiple cues and design a self-trained multi-cues model for video anomaly detection. In addition to appearance features, we find that motion features and reconstruction error features are essential for detecting abnormal behaviors. Our method extracts and fuses these features from video within a self-training framework. Specifically, we use auto-encoders to generate reconstruction error maps for frames and optical flow maps, respectively. We then extract multi-cue features from the frames/flow maps and the reconstruction error maps to detect abnormal events. As our model is self-trained, we do not need manually labeled training data. We conduct validation experiments on two public datasets. The experimental results show that our self-trained multi-cues model outperforms existing unsupervised video anomaly detection methods and achieves competitive results compared with weakly-supervised methods.
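The core idea in the abstract — auto-encoders produce reconstruction error maps for frames and flow maps, and scores from multiple cues are fused into a per-frame anomaly score — can be sketched as below. The per-pixel absolute error, the mean-pooled frame score, and the weighted-sum fusion are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def reconstruction_error_map(x, x_hat):
    """Per-pixel absolute reconstruction error (illustrative choice)."""
    return np.abs(x - x_hat)

def frame_score(error_map):
    """Collapse an error map to a scalar anomaly score for the frame."""
    return float(error_map.mean())

def fuse_scores(appearance_score, motion_score, w_app=0.5, w_mot=0.5):
    """Weighted-sum fusion of appearance and motion cues (assumed scheme)."""
    return w_app * appearance_score + w_mot * motion_score

# Toy example: a well-reconstructed frame vs. a poorly reconstructed one.
# Real inputs would be frames and optical-flow maps passed through trained
# auto-encoders; here constant offsets stand in for reconstruction quality.
rng = np.random.default_rng(0)
frame = rng.random((8, 8))
normal_recon = frame + 0.01    # small error -> low anomaly score
abnormal_recon = frame + 0.5   # large error -> high anomaly score

s_normal = frame_score(reconstruction_error_map(frame, normal_recon))
s_abnormal = frame_score(reconstruction_error_map(frame, abnormal_recon))
assert s_abnormal > s_normal
```

In practice each cue would get its own auto-encoder (appearance on raw frames, motion on flow maps), and the fusion weights would be learned or tuned rather than fixed.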

Original language: English
Pages (from-to): 62333-62347
Number of pages: 15
Journal: Multimedia Tools and Applications
Volume: 83
Issue number: 22
DOIs
Publication status: Published - Jul 2024

Keywords

  • Anomaly detection
  • Unsupervised task
  • Video understanding
