Video object segmentation aggregation

Tianfei Zhou, Yao Lu, Huijun Di, Jian Zhang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

10 Citations (Scopus)

Abstract

We present an approach for unsupervised object segmentation in unconstrained videos. Motivated by the latest progress in this field, we argue that segmentation performance can be largely improved by aggregating the results produced by state-of-the-art algorithms. Initially, objects in individual frames are estimated through a per-frame aggregation procedure using majority voting. While this predicts object locations relatively accurately, the initial estimation fails to cover parts that are wrongly labeled by more than half of the algorithms. To address this, we build a holistic appearance model from non-local appearance cues via linear regression. We then integrate the appearance priors and spatio-temporal information into an energy minimization framework to refine the initial estimation. We evaluate our method on challenging benchmark videos and demonstrate that it outperforms state-of-the-art algorithms.
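The per-frame aggregation step described in the abstract can be pictured as a pixel-wise majority vote over the masks produced by the individual algorithms. The sketch below (Python with NumPy; the function name, mask shapes, and toy inputs are our own illustrative assumptions, not taken from the paper) shows only this initial estimation; the paper's subsequent appearance-model and energy-minimization refinement is not reproduced here.

```python
import numpy as np

def aggregate_masks(masks):
    """Per-frame aggregation by majority voting.

    masks: list of binary (H, W) arrays, one per segmentation algorithm.
    A pixel is labeled foreground in the output if more than half of
    the algorithms label it foreground.
    """
    stack = np.stack([m.astype(np.uint8) for m in masks], axis=0)
    votes = stack.sum(axis=0)                      # per-pixel foreground votes
    return (votes > len(masks) / 2).astype(np.uint8)

# Toy example: three hypothetical per-frame results from different algorithms.
h, w = 4, 4
m1 = np.zeros((h, w), np.uint8); m1[1:3, 1:3] = 1
m2 = np.zeros((h, w), np.uint8); m2[1:4, 1:3] = 1
m3 = np.zeros((h, w), np.uint8); m3[1:3, 0:3] = 1
initial_estimate = aggregate_masks([m1, m2, m3])
print(initial_estimate)
```

As the abstract notes, such a vote misses regions that a majority of the input algorithms label incorrectly, which is what the appearance model and energy minimization are introduced to correct.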

Original language: English
Title of host publication: 2016 IEEE International Conference on Multimedia and Expo, ICME 2016
Publisher: IEEE Computer Society
ISBN (Electronic): 9781467372589
DOIs
Publication status: Published - 25 Aug 2016
Event: 2016 IEEE International Conference on Multimedia and Expo, ICME 2016 - Seattle, United States
Duration: 11 Jul 2016 – 15 Jul 2016

Publication series

Name: Proceedings - IEEE International Conference on Multimedia and Expo
Volume: 2016-August
ISSN (Print): 1945-7871
ISSN (Electronic): 1945-788X

Conference

Conference: 2016 IEEE International Conference on Multimedia and Expo, ICME 2016
Country/Territory: United States
City: Seattle
Period: 11/07/16 – 15/07/16

Keywords

  • Video object segmentation
  • appearance model
  • data fusion
