TY - GEN
T1 - Dangerous object recognition for visual surveillance
AU - Yao, Peng
AU - Wang, Yongtian
AU - Chen, Can
AU - Weng, Dongdong
AU - Liu, Yue
PY - 2012
Y1 - 2012
N2 - In this paper, we address a critical task: dangerous object recognition for visual surveillance systems. Rather than investigating how to determine whether an object is dangerous by analyzing human activities (e.g., leaving a bomb in a public place), our research focuses on recognizing a known dangerous object immediately when it reappears under a surveillance camera, so that further harm can be prevented. Unlike existing template matching and feature matching based methods, we solve the dangerous object recognition problem with a classification-based method. We train an SVM classifier on "bag of words" representations of dangerous and non-dangerous objects. To obtain more discriminative object descriptors, we fuse color and texture, two low-level image features, to generate descriptors within the "bag of words" framework. We evaluate the proposed method against template matching and feature matching methods, and the experimental results validate our approach.
AB - In this paper, we address a critical task: dangerous object recognition for visual surveillance systems. Rather than investigating how to determine whether an object is dangerous by analyzing human activities (e.g., leaving a bomb in a public place), our research focuses on recognizing a known dangerous object immediately when it reappears under a surveillance camera, so that further harm can be prevented. Unlike existing template matching and feature matching based methods, we solve the dangerous object recognition problem with a classification-based method. We train an SVM classifier on "bag of words" representations of dangerous and non-dangerous objects. To obtain more discriminative object descriptors, we fuse color and texture, two low-level image features, to generate descriptors within the "bag of words" framework. We evaluate the proposed method against template matching and feature matching methods, and the experimental results validate our approach.
UR - https://www.scopus.com/pages/publications/84872158357
U2 - 10.1109/ICALIP.2012.6376587
DO - 10.1109/ICALIP.2012.6376587
M3 - Conference contribution
AN - SCOPUS:84872158357
SN - 9781467301718
T3 - ICALIP 2012 - 2012 International Conference on Audio, Language and Image Processing, Proceedings
SP - 55
EP - 61
BT - ICALIP 2012 - 2012 International Conference on Audio, Language and Image Processing, Proceedings
T2 - 2012 3rd IEEE/IET International Conference on Audio, Language and Image Processing, ICALIP 2012
Y2 - 16 July 2012 through 18 July 2012
ER -