Video object segmentation via adaptive threshold based on background model diversity

Boubekeur Mohamed Bachir*, Luo Senlin, Labidi Hocine, Benlefki Tarek

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Background subtraction can be framed as a classification process applied to the incoming frames of a video stream, taking into account temporal information in some cases, spatial consistency in others, and, in recent years, both. In most cases this classification relies on a fixed threshold value. In this paper, a framework for background subtraction and moving object detection based on an adaptive threshold measure and a short/long frame differencing procedure is proposed. The framework explores an adaptive threshold computed from the mean of squared successive differences over a sampled background model. In addition, an intuitive update policy that is neither conservative nor blind is presented. The algorithm succeeds in extracting the moving foreground and isolating an accurate background.
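The abstract does not give the exact formulas, constants, or update rule, so the following is only a minimal Python sketch of the general idea: a per-pixel threshold scaled by the "diversity" of a sampled background model, estimated here as the mean of squared successive differences across the samples, plus an illustrative update policy that refreshes background pixels immediately and blends foreground pixels slowly. The class name, sample count, scale factor, and blending constant are assumptions for illustration, not the authors' implementation, and the short/long frame differencing step is omitted.

```python
import numpy as np


class AdaptiveThresholdBGS:
    """Sketch of background subtraction with a per-pixel adaptive threshold.

    The threshold is proportional to the diversity of the sampled
    background model, measured as the mean of squared successive
    differences over the stored samples (an assumption; the paper's
    exact measure and constants are not given in the abstract).
    """

    def __init__(self, n_samples=20, scale=2.5, min_thresh=15.0, alpha=0.05):
        self.n_samples = n_samples    # size of the sampled background model
        self.scale = scale            # multiplier on the diversity measure
        self.min_thresh = min_thresh  # floor to avoid near-zero thresholds
        self.alpha = alpha            # slow blending rate for foreground pixels
        self.samples = []             # list of grayscale background frames

    def _diversity(self):
        # Mean of squared successive differences across the sample stack.
        stack = np.stack(self.samples).astype(np.float32)
        return np.mean(np.diff(stack, axis=0) ** 2, axis=0)

    def apply(self, gray):
        gray = gray.astype(np.float32)
        if len(self.samples) < self.n_samples:
            # Bootstrap phase: fill the model before segmenting.
            self.samples.append(gray)
            return np.zeros(gray.shape, dtype=np.uint8)

        # Background estimate and per-pixel adaptive threshold.
        background = np.median(np.stack(self.samples), axis=0)
        thresh = np.maximum(self.scale * np.sqrt(self._diversity()),
                            self.min_thresh)
        fg_mask = (np.abs(gray - background) > thresh).astype(np.uint8) * 255

        # Illustrative update policy (neither fully conservative nor blind):
        # one randomly chosen sample is refreshed; background pixels take the
        # new value directly, foreground pixels are blended in slowly.
        idx = np.random.randint(self.n_samples)
        is_bg = fg_mask == 0
        slow_blend = (1.0 - self.alpha) * self.samples[idx] + self.alpha * gray
        self.samples[idx] = np.where(is_bg, gray, slow_blend)
        return fg_mask
```

A caller would feed grayscale frames one by one, e.g. `mask = bgs.apply(frame)`, discarding the all-zero masks returned while the model bootstraps.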

Original language: English
Title of host publication: Sixth International Conference on Graphic and Image Processing, ICGIP 2014
Editors: David Zhang, Yulin Wang, Xudong Jiang
Publisher: SPIE
ISBN (Electronic): 9781628415582
DOIs
Publication status: Published - 2015
Event: 6th International Conference on Graphic and Image Processing, ICGIP 2014 - Beijing, China
Duration: 24 Oct 2014 - 26 Oct 2014

Publication series

Name: Proceedings of SPIE - The International Society for Optical Engineering
Volume: 9443
ISSN (Print): 0277-786X
ISSN (Electronic): 1996-756X

Conference

Conference: 6th International Conference on Graphic and Image Processing, ICGIP 2014
Country/Territory: China
City: Beijing
Period: 24/10/14 - 26/10/14

Keywords

  • Background Subtraction
  • square successive differences
  • surveillance
  • video object segmentation
