MATNet: Motion-Attentive Transition Network for Zero-Shot Video Object Segmentation

Tianfei Zhou, Jianwu Li*, Shunzhou Wang, Ran Tao, Jianbing Shen

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

165 Citations (Scopus)

Abstract

In this paper, we present a novel end-to-end learning neural network, i.e., MATNet, for zero-shot video object segmentation (ZVOS). Motivated by human visual attention behavior, MATNet leverages motion cues as a bottom-up signal to guide the perception of object appearance. To achieve this, an asymmetric attention block, named Motion-Attentive Transition (MAT), is proposed within a two-stream encoder network to first identify moving regions and then attend to appearance learning to capture the full extent of objects. By placing MATs at different convolutional layers, our encoder becomes deeply interleaved, allowing for close hierarchical interactions between object appearance and motion. Such a biologically-inspired design is shown to be superior to conventional two-stream structures, which treat motion and appearance independently in separate streams and often suffer from severe overfitting to object appearance. Moreover, we introduce a bridge network to modulate multi-scale spatiotemporal features into more compact, discriminative and scale-sensitive representations, which are subsequently fed into a boundary-aware decoder network to produce accurate segmentation with crisp boundaries. We perform extensive quantitative and qualitative experiments on four challenging public benchmarks, i.e., DAVIS16, DAVIS17, FBMS and YouTube-Objects. Results show that our method achieves compelling performance against current state-of-the-art ZVOS methods. To further demonstrate the generalization ability of our spatiotemporal learning framework, we extend MATNet to another relevant task: dynamic visual attention prediction (DVAP). Experiments on two popular datasets (i.e., Hollywood-2 and UCF-Sports) further verify the superiority of our model (our code is available at https://github.com/tfzhou/MATNet).
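To make the core idea concrete, the sketch below illustrates one way a motion-attentive transition could gate appearance features: motion features produce an attention map that re-weights flattened appearance features, so motion acts as the bottom-up cue. This is a loose, simplified illustration under our own assumptions, not the authors' implementation (which lives in the linked repository); the names `mat_block`, `Wq`, and `Wk` are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mat_block(appearance, motion, Wq, Wk):
    """Simplified motion-attentive transition (illustrative only).

    appearance, motion: (N, C) feature maps flattened over spatial positions.
    Queries and keys are both derived from the motion stream (the asymmetry:
    motion attends, appearance is attended to), and the resulting attention
    map mixes appearance features before a residual connection.
    """
    q = motion @ Wq                                         # (N, d) motion queries
    k = motion @ Wk                                         # (N, d) motion keys
    attn = softmax(q @ k.T / np.sqrt(q.shape[1]), axis=-1)  # (N, N) attention
    attended = attn @ appearance                            # motion-gated appearance
    return appearance + attended                            # residual transition

# Toy usage: 16 spatial positions, 32-channel features, 8-dim attention space.
rng = np.random.default_rng(0)
A = rng.standard_normal((16, 32))
M = rng.standard_normal((16, 32))
out = mat_block(A, M, rng.standard_normal((32, 8)), rng.standard_normal((32, 8)))
print(out.shape)  # (16, 32): same shape as the appearance input
```

Placing such a block at several encoder stages, as the abstract describes, is what makes the two streams "deeply interleaved" rather than fused only at the end.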

Original language: English
Article number: 9165947
Pages (from-to): 8326-8338
Number of pages: 13
Journal: IEEE Transactions on Image Processing
Volume: 29
DOIs
Publication status: Published - 2020

Keywords

  • Video object segmentation
  • dynamic visual attention prediction
  • neural attention
  • spatiotemporal representation
  • two-stream
  • zero-shot
