Semi-supervised segmentation of lesion from breast ultrasound images with attentional generative adversarial network

Luyi Han, Yunzhi Huang*, Haoran Dou, Shuai Wang, Sahar Ahamad, Honghao Luo, Qi Liu, Jingfan Fan, Jiang Zhang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

75 Citations (Scopus)

Abstract

Background and objective: Automatic segmentation of breast lesions from ultrasound images is a crucial module for computer-aided diagnostic systems in clinical practice. Large numbers of breast ultrasound (BUS) images remain unannotated and need to be effectively exploited to improve segmentation quality. To address this, a semi-supervised segmentation network is proposed based on generative adversarial networks (GANs). Methods: In this paper, a semi-supervised learning model, denoted BUS-GAN, is proposed, consisting of a segmentation base network (BUS-S) and an evaluation base network (BUS-E). The BUS-S network densely extracts multi-scale features to accommodate the individual variability of breast lesions, thereby enhancing segmentation robustness. In addition, the BUS-E network adopts a dual-attentive-fusion block with two independent spatial attention paths on the predicted segmentation map and the corresponding original image to distill geometrical-level and intensity-level information, respectively, enlarging the difference between the lesion region and the background and thus improving the discriminative ability of the BUS-E network. Through adversarial training, the BUS-GAN model achieves higher segmentation quality because the BUS-E network guides the BUS-S network to generate more accurate segmentation maps whose distribution is closer to that of the ground truth. Results: The proposed BUS-GAN model and counterpart semi-supervised segmentation methods were trained with 2000 in-house images (100 annotated and 1900 unannotated) and tested on data from two sites: 800 in-house images and 163 public images. The results show that the proposed BUS-GAN model achieves higher segmentation accuracy than state-of-the-art semi-supervised segmentation methods on both the in-house testing dataset and the public dataset. Conclusions: The developed BUS-GAN model can effectively utilize unannotated breast ultrasound images to improve segmentation quality. In the future, the proposed method could serve as a module in automatic breast ultrasound diagnosis systems, relieving the burden of tedious image annotation and alleviating the subjective influence of physicians' experience in clinical practice. Our code will be made available at https://github.com/fiy2W/BUS-GAN.
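To make the dual-attentive-fusion idea in the abstract concrete, the following is a minimal PyTorch sketch of an evaluation (discriminator) block with two independent spatial attention paths, one over the predicted segmentation map (geometrical-level cue) and one over the original image (intensity-level cue), whose features are fused into a single realism score. All layer sizes, module names, and the fusion strategy here are illustrative assumptions, not the authors' BUS-E implementation; for the exact architecture see the code repository linked above.

```python
# Hypothetical sketch of a dual-attentive-fusion evaluation block (BUS-E style).
# Layer widths and the fusion/score head are assumptions for illustration only.
import torch
import torch.nn as nn


class SpatialAttention(nn.Module):
    """Re-weights features with a per-pixel attention map in [0, 1]."""
    def __init__(self, in_ch):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.attn(x)  # spatially re-weighted features


class DualAttentiveFusion(nn.Module):
    """Two independent spatial attention paths: one on the predicted mask
    (geometry), one on the original image (intensity); fused features are
    mapped to a scalar realism score."""
    def __init__(self, feat_ch=16):
        super().__init__()
        self.mask_path = nn.Sequential(
            nn.Conv2d(1, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            SpatialAttention(feat_ch))
        self.image_path = nn.Sequential(
            nn.Conv2d(1, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            SpatialAttention(feat_ch))
        self.score = nn.Sequential(
            nn.Conv2d(2 * feat_ch, feat_ch, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(feat_ch, 1),  # higher = "looks like a ground-truth mask"
        )

    def forward(self, image, pred_mask):
        fused = torch.cat([self.mask_path(pred_mask),
                           self.image_path(image)], dim=1)
        return self.score(fused)


if __name__ == "__main__":
    evaluator = DualAttentiveFusion()
    img = torch.randn(2, 1, 128, 128)   # grayscale BUS images
    mask = torch.rand(2, 1, 128, 128)   # predicted segmentation maps
    print(evaluator(img, mask).shape)   # torch.Size([2, 1])
```

In the semi-supervised setting described in the abstract, such an evaluator would be trained adversarially against the segmentation network: BUS-S is fitted to the 100 annotated images with a supervised loss while also being pushed, on the 1900 unannotated images, to produce maps that the evaluator cannot distinguish from ground-truth annotations.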

Original language: English
Article number: 105275
Journal: Computer Methods and Programs in Biomedicine
Volume: 189
DOIs
Publication status: Published - Jun 2020

Keywords

  • Attention mechanism
  • Breast lesion
  • Generative adversarial networks
  • Image segmentation
  • Semi-supervised learning
  • Ultrasound image
