TY - GEN
T1 - Completing Saliency from Details
AU - Zhang, Jin
AU - Liu, Yumeng
AU - Wu, Lingxiang
AU - Dian, Renwei
AU - Yao, Yiheng
AU - Huang, Shihao
AU - Yang, Yang
AU - Zhang, Ruiheng
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2025.
PY - 2025
Y1 - 2025
N2 - Salient object detection (SOD) models based on the UNet or FCN structure have reached a significant milestone, and adding edge constraints to SOD models has progressively become common practice in current methods. Although these methods produce excellent results, they still lack sufficient confidence in regions with sharp object edges owing to sample imbalance. In addition, compressing the encoded features to lower dimensions to reduce computational cost, a commonly used technique, unavoidably diminishes the model's precision. To overcome these issues, we propose a feature mutual feedback network (FMFNet) for the SOD task, in which a semantic supplement module (SSM) integrates diverse feature information through different receptive fields to preserve important features. In addition, we provide a novel details map, which serves better than an edge map in helping the model learn hard edge regions, resulting in more complete saliency maps. Extensive experiments on five benchmark datasets demonstrate the effectiveness, robustness, and superiority of the proposed model and details map.
KW - Details map
KW - Edge supervision
KW - Salient object detection
UR - http://www.scopus.com/inward/record.url?scp=85210018487&partnerID=8YFLogxK
U2 - 10.1007/978-981-97-8493-6_11
DO - 10.1007/978-981-97-8493-6_11
M3 - Conference contribution
AN - SCOPUS:85210018487
SN - 9789819784929
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 151
EP - 164
BT - Pattern Recognition and Computer Vision - 7th Chinese Conference, PRCV 2024, Proceedings
A2 - Lin, Zhouchen
A2 - Zha, Hongbin
A2 - Cheng, Ming-Ming
A2 - He, Ran
A2 - Liu, Cheng-Lin
A2 - Ubul, Kurban
A2 - Silamu, Wushouer
A2 - Zhou, Jie
PB - Springer Science and Business Media Deutschland GmbH
T2 - 7th Chinese Conference on Pattern Recognition and Computer Vision, PRCV 2024
Y2 - 18 October 2024 through 20 October 2024
ER -