Learning to generate video object segment proposals

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

4 Citations (Scopus)

Abstract

This paper proposes a fully automatic pipeline to generate accurate object segment proposals in realistic videos. Our approach first detects generic object proposals for all video frames and then learns to rank them using a Convolutional Neural Network (CNN) descriptor built on appearance and motion cues, reducing the ambiguity of the proposal set while retaining its quality as much as possible. Next, high-scoring proposals are greedily tracked over the entire sequence into distinct tracklets. Observing that the proposal tracklet set at this stage is noisy and redundant, we apply a tracklet selection scheme that suppresses highly overlapping tracklets and detects occlusions based on appearance and location information. Finally, we exploit holistic appearance cues to refine the video segment proposals and obtain pixel-accurate segmentations. Our method is evaluated on two video segmentation datasets, SegTrack v1 and FBMS-59, and achieves competitive results in comparison with other state-of-the-art methods.
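As an illustrative aside, the tracklet selection step described in the abstract amounts to a greedy suppression of highly overlapping tracklets. The sketch below shows one plausible form of such a scheme; the tracklet representation (a ranking score plus per-frame binary masks) and the 0.5 overlap threshold are assumptions made for illustration, not details taken from the paper.

```python
import numpy as np

def mask_iou(a, b):
    """Intersection-over-union of two boolean segmentation masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union > 0 else 0.0

def tracklet_overlap(t1, t2):
    """Mean per-frame mask IoU over the frames the two tracklets share."""
    shared = set(t1["masks"]) & set(t2["masks"])
    if not shared:
        return 0.0
    return float(np.mean([mask_iou(t1["masks"][f], t2["masks"][f]) for f in shared]))

def select_tracklets(tracklets, overlap_thresh=0.5):
    """Greedy non-maximum suppression over proposal tracklets:
    keep the highest-scoring tracklet, discard any remaining tracklet
    that overlaps it by more than `overlap_thresh`, then repeat."""
    remaining = sorted(tracklets, key=lambda t: t["score"], reverse=True)
    kept = []
    while remaining:
        best = remaining.pop(0)
        kept.append(best)
        remaining = [t for t in remaining
                     if tracklet_overlap(best, t) <= overlap_thresh]
    return kept

if __name__ == "__main__":
    # Toy example: two near-duplicate tracklets and one covering a distinct region.
    m = np.zeros((4, 4), bool); m[:2, :2] = True
    n = np.zeros((4, 4), bool); n[2:, 2:] = True
    tracklets = [
        {"score": 0.9, "masks": {0: m, 1: m}},
        {"score": 0.8, "masks": {0: m}},        # overlaps the first -> suppressed
        {"score": 0.7, "masks": {0: n, 1: n}},  # distinct object -> kept
    ]
    print(len(select_tracklets(tracklets)))  # 2
```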

Original language: English
Title of host publication: 2017 IEEE International Conference on Multimedia and Expo, ICME 2017
Publisher: IEEE Computer Society
Pages: 787-792
Number of pages: 6
ISBN (Electronic): 9781509060672
DOIs
Publication status: Published - 28 Aug 2017
Event: 2017 IEEE International Conference on Multimedia and Expo, ICME 2017 - Hong Kong, Hong Kong
Duration: 10 Jul 2017 - 14 Jul 2017

Publication series

Name: Proceedings - IEEE International Conference on Multimedia and Expo
ISSN (Print): 1945-7871
ISSN (Electronic): 1945-788X

Conference

Conference: 2017 IEEE International Conference on Multimedia and Expo, ICME 2017
Country/Territory: Hong Kong
City: Hong Kong
Period: 10/07/17 - 14/07/17
