Dangerous object recognition for visual surveillance

Peng Yao*, Yongtian Wang, Can Chen, Dongdong Weng, Yue Liu

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Citation (Scopus)

Abstract

In this paper, we address a critical task: dangerous object recognition for visual surveillance systems. Rather than investigating how to determine whether an object is dangerous by analyzing human activities (e.g. leaving a bomb in a public place), our research focuses on how to recognize the dangerous object immediately when it appears under a surveillance camera again, so that further harm can be prevented. Unlike existing template- and feature-matching based methods, we solve the dangerous object recognition problem with a classification-based method: we train an SVM classifier on "bag of words" based representations of dangerous and non-dangerous objects. To obtain more discriminative object descriptors, we fuse two low-level image features, color and texture, to generate descriptors under the "bag of words" framework. We evaluate the proposed method against template- and feature-matching methods, and the experimental results validate our approach.
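The abstract only outlines the pipeline (fused color/texture patch descriptors, a learned "bag of words" vocabulary, and an SVM classifier), so the sketch below is an illustrative reconstruction rather than the authors' implementation. The patch size, histogram bin counts, LBP parameters, vocabulary size, and SVM kernel are assumptions; the helper names (`patch_descriptor`, `bow_histogram`, `train`, `predict`) are hypothetical, and scikit-learn/scikit-image are simply convenient stand-ins for whatever the paper used.

```python
# Minimal sketch of a bag-of-words pipeline with fused color + texture
# features and an SVM classifier, assuming dense patch sampling.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.cluster import KMeans
from sklearn.svm import SVC


def patch_descriptor(patch_rgb):
    """Fuse a per-channel color histogram with an LBP texture histogram."""
    color_hist = np.concatenate([
        np.histogram(patch_rgb[..., c], bins=8, range=(0, 255))[0]
        for c in range(3)
    ])
    gray = patch_rgb.mean(axis=2).astype(np.uint8)
    lbp = local_binary_pattern(gray, P=8, R=1.0, method="uniform")
    texture_hist = np.histogram(lbp, bins=10, range=(0, 10))[0]
    desc = np.concatenate([color_hist, texture_hist]).astype(float)
    return desc / (desc.sum() + 1e-8)


def dense_patches(image, size=16, stride=16):
    """Yield non-overlapping square patches on a regular grid."""
    h, w = image.shape[:2]
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            yield image[y:y + size, x:x + size]


def bow_histogram(image, vocab):
    """Quantize patch descriptors against the vocabulary and count words."""
    descs = np.array([patch_descriptor(p) for p in dense_patches(image)])
    words = vocab.predict(descs)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-8)


def train(images, labels, vocab_size=64):
    """images: list of HxWx3 uint8 arrays; labels: 1 = dangerous, 0 = not."""
    all_descs = np.vstack([
        np.array([patch_descriptor(p) for p in dense_patches(img)])
        for img in images
    ])
    vocab = KMeans(n_clusters=vocab_size, n_init=10).fit(all_descs)
    X = np.array([bow_histogram(img, vocab) for img in images])
    clf = SVC(kernel="rbf", C=10.0).fit(X, labels)
    return vocab, clf


def predict(image, vocab, clf):
    """Classify a new image as dangerous (1) or non-dangerous (0)."""
    return clf.predict(bow_histogram(image, vocab).reshape(1, -1))[0]
```

Classifying bag-of-words histograms with an SVM, rather than matching templates or raw features against a gallery, is what lets the approach generalize across viewpoint and illumination changes between sightings of the same object.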

Original language: English
Title of host publication: ICALIP 2012 - 2012 International Conference on Audio, Language and Image Processing, Proceedings
Pages: 55-61
Number of pages: 7
DOIs
Publication status: Published - 2012
Event: 2012 3rd IEEE/IET International Conference on Audio, Language and Image Processing, ICALIP 2012 - Shanghai, China
Duration: 16 Jul 2012 - 18 Jul 2012

Publication series

Name: ICALIP 2012 - 2012 International Conference on Audio, Language and Image Processing, Proceedings

Conference

Conference: 2012 3rd IEEE/IET International Conference on Audio, Language and Image Processing, ICALIP 2012
Country/Territory: China
City: Shanghai
Period: 16/07/12 - 18/07/12
