Abstract
Curvilinear Structure Segmentation (CSS) aims to predict binary masks of curvilinear objects, such as blood vessels and road cracks, in photographs. Despite impressive segmentation results, current CSS methods often suffer from disconnected segmentations and inaccurate edges, essentially caused by feature-domain distractions arising from image-domain challenges, and by the scale imbalance between thicker and thinner branches. To address these problems, we propose the Distraction Mining and Dual Mutual Network (DMDMN), which improves CSS through reverse attention and mutual learning. Specifically, DMDMN first extracts multiple levels of features from an input image using a plain backbone. A Reverse Attention Module (RAM) at each level then enhances the extracted features by identifying and removing false-positive and false-negative distractions. Next, a Three-Head Fusion Module (THFM) at a separate level serves as an exchanger, mutually integrating features from the branch-segmentation head with those from the skeleton-extraction and edge-detection heads. With the segmented results flowing back to the RAMs, the previous two steps alternate several times until the final prediction: a high-quality segmentation that is topologically connected along branches and pixel-wise accurate at edges. In addition, extensive experiments on five public datasets demonstrate the superiority of the proposed DMDMN over state-of-the-art approaches, both qualitatively and quantitatively.
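The core idea of reverse attention, as the abstract describes it, is to down-weight regions the current prediction is already confident about so the network attends to ambiguous, distraction-prone pixels. The sketch below is a minimal NumPy illustration of that idea only; the function name, shapes, and the simple `1 - sigmoid(pred)` mask are assumptions for exposition, not the authors' RAM implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reverse_attention(features, pred_logits):
    """Hypothetical reverse-attention sketch (not the paper's RAM).

    features:    (C, H, W) feature map from one backbone level
    pred_logits: (H, W) coarse segmentation logits fed back to this level

    The reverse mask is large where the prediction says "background",
    so multiplying suppresses confidently-foreground regions and
    emphasizes uncertain ones, where false positives/negatives hide.
    """
    reverse_mask = 1.0 - sigmoid(pred_logits)      # in (0, 1), high on predicted background
    return features * reverse_mask[None, :, :]     # broadcast the mask over channels

# Toy usage with random tensors standing in for real features/logits.
feats = np.random.rand(8, 16, 16)    # non-negative features
logits = np.random.randn(16, 16)     # coarse prediction logits
refined = reverse_attention(feats, logits)
```

In an iterative scheme like the one the abstract outlines, `refined` would feed the next round of heads, and their new prediction would flow back in as `pred_logits`.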
| Original language | English |
|---|---|
| Article number | 817 |
| Journal | Applied Intelligence |
| Volume | 55 |
| Issue number | 11 |
| DOIs | |
| Publication status | Published - Jul 2025 |
| Externally published | Yes |
Keywords
- Curvilinear structure
- Multi-task learning
- Reverse attention
- Semantic segmentation
Title
DMDMN: distraction mining and dual mutual network for curvilinear structure segmentation