CoGANet: Co-Guided Attention Network for Salient Object Detection

Yufei Zhao, Yong Song*, Guoqi Li*, Yi Huang, Yashuo Bai, Ya Zhou, Qun Hao

*Corresponding authors of this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Recent salient object detection methods are mainly based on Convolutional Neural Networks (CNNs). Most of them adopt a U-shape architecture to extract and fuse multi-scale features. Coarser-level semantic information is progressively transmitted to finer-level layers through continuous upsampling operations, so the coarsest-level features are gradually diluted, leaving the salient object boundaries blurred. On the other hand, hand-crafted features have the advantage of being purposive and easy to compute; in particular, the edge density feature, with its rich edge information, may help sharpen salient object boundaries. In this paper, we propose a Co-Guided Attention Network (CoGANet). Built on the Feature Pyramid Network (FPN), our model implements a co-guided attention mechanism between the image itself and its edge density feature. In the bottom-up pathway of the FPN, two streams operate separately, taking the original image and its edge density feature as inputs and each producing five feature maps. The last feature map in each stream then generates a set of attention maps through a Multi-scale Spatial Attention Module (MSAM). In the top-down pathway, the attention maps of one stream are delivered directly to each stage of the other stream, where they are fused with the feature maps by an Attention-based Feature Fusion Module (AFFM). Finally, an accurate saliency map is produced by fusing the finest-level outputs of the two streams. Experimental results on five benchmark datasets demonstrate that our model outperforms 13 state-of-the-art methods in terms of four evaluation metrics.
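The hand-crafted edge density feature used as the second input stream can be understood as the local fraction of edge pixels around each position. Below is a minimal sketch of one plausible computation, assuming gradient-magnitude edge detection via `np.gradient`, a fixed threshold, and a square box window summed with an integral image; the paper's exact edge detector, threshold, and window size are not specified here, so these choices are illustrative only.

```python
import numpy as np

def edge_density(gray, win=7, thresh=0.1):
    """Per-pixel edge density of a grayscale image (values in [0, 1]).

    gray:   2-D float array (H x W)
    win:    side length of the square local window (assumed odd)
    thresh: gradient-magnitude threshold for declaring an edge pixel
    """
    # 1) Simple edge map: gradient magnitude above a threshold.
    gy, gx = np.gradient(gray.astype(float))
    edges = (np.hypot(gx, gy) > thresh).astype(float)

    # 2) Pad so every output pixel has a full win x win neighborhood.
    pad = win // 2
    padded = np.pad(edges, pad, mode="edge")

    # 3) Integral image: window sums in O(1) per pixel.
    ii = np.cumsum(np.cumsum(np.pad(padded, ((1, 0), (1, 0))), axis=0), axis=1)
    h, w = edges.shape
    window_sum = (ii[win:win + h, win:win + w] - ii[:h, win:win + w]
                  - ii[win:win + h, :w] + ii[:h, :w])

    # Density = fraction of edge pixels inside each local window.
    return window_sum / (win * win)
```

A flat image yields zero density everywhere, while pixels near intensity discontinuities get high values, which is why this map can guide attention toward object boundaries.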

Original language: English
Article number: 7842812
Journal: IEEE Photonics Journal
Volume: 14
Issue number: 4
DOI
Publication status: Published - 1 Aug 2022
