Refined video segmentation through global appearance regression

Lin Zhang, Yao Lu*, Lihua Lu, Tianfei Zhou

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

8 Citations (Scopus)

Abstract

To achieve accurate segmentation in unconstrained videos, we propose a novel segmentation framework based on a two-stream deep convolutional network. Our algorithm exploits the object's robust pixel-level features across all video frames and generates foreground likelihood maps with sufficient detail. First, a two-stream video segmentation network using multiple hierarchical features is designed to generate initial segmentation masks. Then, all initial segmentation masks and their corresponding original images are collected to learn an appearance model via least-squares regression; the model computes an appearance likelihood map for every image. Finally, pairs of initial segmentation masks and appearance likelihood maps are fused by a proposed fusion network to produce the final high-quality segmentation maps. Experiments on the challenging DAVIS dataset verify the effectiveness of our appearance regression and demonstrate that the proposed algorithm outperforms state-of-the-art algorithms.
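The global appearance regression step described above can be illustrated with a minimal NumPy sketch. This is an assumption-laden simplification: it uses raw per-pixel RGB values as features and a ridge-regularised closed-form least-squares solve, whereas the paper's actual feature representation, regularisation, and fusion network are not specified here. The function names `fit_appearance_model` and `appearance_likelihood` are hypothetical.

```python
import numpy as np

def fit_appearance_model(frames, masks, lam=1e-3):
    """Least-squares regression from per-pixel colour features to
    foreground labels, pooled over all frames (a global appearance model).
    NOTE: RGB features and ridge regularisation are illustrative
    assumptions, not the paper's exact formulation."""
    # Stack per-pixel RGB features (plus a bias column) from every frame.
    X = np.concatenate([f.reshape(-1, 3) for f in frames], axis=0) / 255.0
    X = np.hstack([X, np.ones((X.shape[0], 1))])
    # Labels come from the initial segmentation masks (1 = foreground).
    y = np.concatenate([m.reshape(-1) for m in masks], axis=0).astype(float)
    # Closed-form ridge least squares: w = (X'X + lam*I)^-1 X'y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def appearance_likelihood(frame, w):
    """Apply the learned model to one frame, yielding a per-pixel
    appearance likelihood map clipped to [0, 1]."""
    X = frame.reshape(-1, 3) / 255.0
    X = np.hstack([X, np.ones((X.shape[0], 1))])
    return np.clip(X @ w, 0.0, 1.0).reshape(frame.shape[:2])
```

In the full pipeline these likelihood maps would then be fused with the initial masks by the fusion network; the sketch stops at the regression stage the abstract highlights.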

Original language: English
Pages (from-to): 59-67
Number of pages: 9
Journal: Neurocomputing
Volume: 334
DOI
Publication status: Published - 21 Mar 2019
Published externally: Yes
