Multi-view fast object detection by using extended haar filters in uncontrolled environments

Sadi Vural*, Yasushi Mae, Huseyin Uvet, Tatsuo Arai

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

10 Citations (Scopus)

Abstract

In this paper, we propose a multi-view object detection methodology based on a specific extended class of Haar-like filters, which detects objects with high accuracy in unconstrained environments. Several existing object detection techniques work well in restricted environments, where illumination is constant and the view angle of the object is limited. The proposed methodology successfully detects faces, cars, and logo objects at any size and pose with high accuracy under real-world conditions. To cope with angle variation, we propose multiple trained cascades built with the proposed filters, which improve detection by spanning a different range of orientations in each cascade. We tested the proposed approach on still images from image databases and conducted further evaluations on video from an IP camera placed outdoors, detecting faces, logos, and vehicles in different environments. The experimental results show that the proposed method yields higher classification performance than Viola and Jones's detector, which uses a single feature for each weak classifier. With fewer features, our detector detects faces, objects, and vehicles at 15 fps on 4-megapixel images with 95% accuracy on an Intel i7 2.8 GHz machine.
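The speed of Haar-filter cascades like the one described above comes from evaluating each rectangular filter in constant time over an integral image (summed-area table). The sketch below is purely illustrative of that mechanism, not the authors' implementation or their extended filter class; the image, filter geometry, and function names are assumptions for the example.

```python
# Illustrative sketch (NOT the paper's method): evaluating a basic
# two-rectangle Haar-like feature in O(1) via an integral image, the
# mechanism that makes cascade detectors fast.

def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle with top-left (x, y) and size w x h,
    using four table lookups regardless of rectangle size."""
    a = ii[y + h - 1][x + w - 1]
    b = ii[y - 1][x + w - 1] if y > 0 else 0
    c = ii[y + h - 1][x - 1] if x > 0 else 0
    d = ii[y - 1][x - 1] if x > 0 and y > 0 else 0
    return a - b - c + d

def haar_two_rect_vertical(ii, x, y, w, h):
    """Two-rectangle feature: left half-sum minus right half-sum (w even)."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)

# Example: a 4x4 patch with a bright left half and dark right half
# responds strongly to the vertical edge filter.
img = [[9, 9, 1, 1]] * 4
ii = integral_image(img)
print(haar_two_rect_vertical(ii, 0, 0, 4, 4))  # (9*8) - (1*8) = 64
```

In a multi-view scheme such as the one the abstract describes, several cascades of such filters would be trained, each covering its own orientation range, and a window is accepted if any cascade fires.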

Original language: English
Pages (from-to): 126-133
Number of pages: 8
Journal: Pattern Recognition Letters
Volume: 33
Issue number: 2
DOIs
Publication status: Published - 15 Jan 2012
Externally published: Yes

Keywords

  • Face detection
  • Haar filters
  • Multi-view
  • Object detection
  • Restricted environments
