TY - JOUR
T1 - Semi-supervised segmentation of lesion from breast ultrasound images with attentional generative adversarial network
AU - Han, Luyi
AU - Huang, Yunzhi
AU - Dou, Haoran
AU - Wang, Shuai
AU - Ahamad, Sahar
AU - Luo, Honghao
AU - Liu, Qi
AU - Fan, Jingfan
AU - Zhang, Jiang
N1 - Publisher Copyright:
© 2019
PY - 2020/6
Y1 - 2020/6
N2 - Background and objective: Automatic segmentation of breast lesions from ultrasound images is a crucial module for computer-aided diagnosis systems in clinical practice. Large numbers of breast ultrasound (BUS) images remain unannotated and need to be effectively exploited to improve segmentation quality. To address this, a semi-supervised segmentation network based on generative adversarial networks (GAN) is proposed. Methods: In this paper, a semi-supervised learning model, denoted as BUS-GAN, is proposed, consisting of a segmentation base network, BUS-S, and an evaluation base network, BUS-E. The BUS-S network densely extracts multi-scale features to accommodate the individual variance of breast lesions, thereby enhancing the robustness of segmentation. In addition, the BUS-E network adopts a dual-attentive-fusion block with two independent spatial attention paths applied to the predicted segmentation map and the corresponding original image, distilling geometrical-level and intensity-level information, respectively, so as to enlarge the difference between the lesion region and the background and thus improve the discriminative ability of the BUS-E network. Through adversarial training, the BUS-GAN model achieves higher segmentation quality because the BUS-E network guides the BUS-S network to generate more accurate segmentation maps whose distribution is closer to that of the ground truth. Results: The counterpart semi-supervised segmentation methods and the proposed BUS-GAN model were trained with 2000 in-house images, including 100 annotated images and 1900 unannotated images, and tested on data from two different sites, comprising 800 in-house images and 163 public images. The results validate that the proposed BUS-GAN model achieves higher segmentation accuracy on both the in-house testing dataset and the public dataset than state-of-the-art semi-supervised segmentation methods. Conclusions: The developed BUS-GAN model can effectively utilize unannotated breast ultrasound images to improve segmentation quality. In the future, the proposed segmentation method can serve as a module in an automatic breast ultrasound diagnosis system, relieving the burden of tedious image annotation and reducing the subjective influence of physicians’ experience in clinical practice. Our code will be made available at https://github.com/fiy2W/BUS-GAN.
AB - Background and objective: Automatic segmentation of breast lesions from ultrasound images is a crucial module for computer-aided diagnosis systems in clinical practice. Large numbers of breast ultrasound (BUS) images remain unannotated and need to be effectively exploited to improve segmentation quality. To address this, a semi-supervised segmentation network based on generative adversarial networks (GAN) is proposed. Methods: In this paper, a semi-supervised learning model, denoted as BUS-GAN, is proposed, consisting of a segmentation base network, BUS-S, and an evaluation base network, BUS-E. The BUS-S network densely extracts multi-scale features to accommodate the individual variance of breast lesions, thereby enhancing the robustness of segmentation. In addition, the BUS-E network adopts a dual-attentive-fusion block with two independent spatial attention paths applied to the predicted segmentation map and the corresponding original image, distilling geometrical-level and intensity-level information, respectively, so as to enlarge the difference between the lesion region and the background and thus improve the discriminative ability of the BUS-E network. Through adversarial training, the BUS-GAN model achieves higher segmentation quality because the BUS-E network guides the BUS-S network to generate more accurate segmentation maps whose distribution is closer to that of the ground truth. Results: The counterpart semi-supervised segmentation methods and the proposed BUS-GAN model were trained with 2000 in-house images, including 100 annotated images and 1900 unannotated images, and tested on data from two different sites, comprising 800 in-house images and 163 public images. The results validate that the proposed BUS-GAN model achieves higher segmentation accuracy on both the in-house testing dataset and the public dataset than state-of-the-art semi-supervised segmentation methods. Conclusions: The developed BUS-GAN model can effectively utilize unannotated breast ultrasound images to improve segmentation quality. In the future, the proposed segmentation method can serve as a module in an automatic breast ultrasound diagnosis system, relieving the burden of tedious image annotation and reducing the subjective influence of physicians’ experience in clinical practice. Our code will be made available at https://github.com/fiy2W/BUS-GAN.
KW - Attention mechanism
KW - Breast lesion
KW - Generative adversarial networks
KW - Image segmentation
KW - Semi-supervised learning
KW - Ultrasound image
UR - http://www.scopus.com/inward/record.url?scp=85078510595&partnerID=8YFLogxK
U2 - 10.1016/j.cmpb.2019.105275
DO - 10.1016/j.cmpb.2019.105275
M3 - Article
C2 - 31978805
AN - SCOPUS:85078510595
SN - 0169-2607
VL - 189
JO - Computer Methods and Programs in Biomedicine
JF - Computer Methods and Programs in Biomedicine
M1 - 105275
ER -