Structural Transformer with Region Strip Attention for Video Object Segmentation

Qingfeng Guan, Hao Fang, Chenchen Han, Zhicheng Wang, Ruiheng Zhang, Yitian Zhang*, Xiankai Lu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Memory-based methods in semi-supervised video object segmentation (VOS) achieve competitive performance by computing feature similarity between the current frame and memory frames. However, this operation faces two challenges: 1) occlusion caused by object interactions, and 2) interference from similar objects or clutter in the background. In this work, we propose a Structural Transformer with Region Strip Attention (STRSA) approach to address these challenges. Specifically, we build a Structural Transformer (ST) architecture that decomposes the feature similarity between the long-term memory frames and the current frame into two aspects: a time–space part and an object-significance part. This allows us to investigate the spatio-temporal relationships among pixels and capture the salient features of the objects, so that the differences between pixels and the specificity of objects are fully exploited. In addition, we leverage the object location information from the long-term memory masks together with strip pooling to design a Region Strip Attention (RSA) module, which boosts attention on foreground regions and suppresses background clutter. Extensive experiments on the DAVIS, YouTube-VOS, and MOSE benchmarks demonstrate that our method achieves satisfactory results and outperforms the retrained baseline model.
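
The abstract gives no implementation details, so the following is only a minimal PyTorch sketch of how a mask-guided strip-attention gate of this kind could be wired up, assuming standard horizontal/vertical strip pooling (adaptive average pooling to H×1 and 1×W strips). The class name RegionStripAttention, the 1×1 projections, and the residual gating are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: a mask-guided strip-attention gate.
# All names and design choices here are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionStripAttention(nn.Module):
    """Hypothetical Region Strip Attention: pool mask-weighted features
    into horizontal and vertical strips, then gate the feature map so
    foreground regions are boosted and background clutter is damped."""

    def __init__(self, channels: int):
        super().__init__()
        self.h_proj = nn.Conv2d(channels, channels, kernel_size=1)  # horizontal-strip projection
        self.v_proj = nn.Conv2d(channels, channels, kernel_size=1)  # vertical-strip projection
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1)    # fuses both strip maps

    def forward(self, feat: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) current-frame features
        # mask: (B, 1, H, W) soft object mask carried over from long-term memory
        b, c, h, w = feat.shape
        weighted = feat * mask  # use the memory mask as a foreground prior
        h_strip = F.adaptive_avg_pool2d(weighted, (h, 1))  # (B, C, H, 1): average over width
        v_strip = F.adaptive_avg_pool2d(weighted, (1, w))  # (B, C, 1, W): average over height
        # Broadcast each strip descriptor back over the full map and fuse.
        attn = torch.sigmoid(self.fuse(
            self.h_proj(h_strip).expand(-1, -1, h, w)
            + self.v_proj(v_strip).expand(-1, -1, h, w)
        ))
        return feat * attn + feat  # gated output with a residual path
```

In this reading, the memory mask acts as a spatial prior: strips that overlap the object accumulate larger pooled responses, so the resulting sigmoid gate amplifies foreground strips while damping background ones. As a quick shape check, RegionStripAttention(64)(torch.randn(2, 64, 30, 54), torch.rand(2, 1, 30, 54)) returns a (2, 64, 30, 54) tensor.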

Original language: English
Article number: 128076
Journal: Neurocomputing
Volume: 596
DOIs
Publication status: Published - 1 Sept 2024

Keywords

  • Region Strip Attention
  • Structural Transformer
  • Video Object Segmentation
