TY - GEN
T1 - General image segmentation by Deeper Residual U-Net
AU - Duan, Yuxin
AU - He, Siyuan
AU - Guo, Dong
AU - Jiang, Xuru
AU - Liu, Fengkui
N1 - Publisher Copyright:
© 2019 Association for Computing Machinery.
PY - 2019/4/12
Y1 - 2019/4/12
N2 - With the development of deep learning, semantic segmentation with convolutional neural networks has received a great deal of attention, and numerous network architectures have been proposed. In biomedical image processing, U-Net has achieved remarkable results. However, because its convolution operations are too weak to extract complex image information, U-Net performs poorly on general semantic segmentation. Therefore, in this paper, we propose a new neural network framework, called 'Deeper Residual U-Net', for general image semantic segmentation. In our method, we apply ResNet101 to extract features and, compared with U-Net, use a double feature-fusion mechanism. In the first fusion, the Deeper Residual U-Net upsamples the features of each stage and fuses them with the features of the previous layer one by one, so that low-level features contain more abstract information. In the second fusion, it upsamples the fused features of all stages to the same size and combines them for prediction. We test our network on the Pascal VOC 2012 dataset and obtain a mean accuracy of 80.9 and an mIoU of 74.3, which is already usable for general image segmentation.
AB - With the development of deep learning, semantic segmentation with convolutional neural networks has received a great deal of attention, and numerous network architectures have been proposed. In biomedical image processing, U-Net has achieved remarkable results. However, because its convolution operations are too weak to extract complex image information, U-Net performs poorly on general semantic segmentation. Therefore, in this paper, we propose a new neural network framework, called 'Deeper Residual U-Net', for general image semantic segmentation. In our method, we apply ResNet101 to extract features and, compared with U-Net, use a double feature-fusion mechanism. In the first fusion, the Deeper Residual U-Net upsamples the features of each stage and fuses them with the features of the previous layer one by one, so that low-level features contain more abstract information. In the second fusion, it upsamples the fused features of all stages to the same size and combines them for prediction. We test our network on the Pascal VOC 2012 dataset and obtain a mean accuracy of 80.9 and an mIoU of 74.3, which is already usable for general image segmentation.
KW - Feature fusion
KW - Semantic segmentation
KW - Skip connection
KW - U-Net
UR - http://www.scopus.com/inward/record.url?scp=85068874452&partnerID=8YFLogxK
U2 - 10.1145/3325730.3325739
DO - 10.1145/3325730.3325739
M3 - Conference contribution
AN - SCOPUS:85068874452
T3 - ACM International Conference Proceeding Series
SP - 123
EP - 127
BT - ICMAI 2019 - Proceedings of 2019 4th International Conference on Mathematics and Artificial Intelligence
PB - Association for Computing Machinery
T2 - 4th International Conference on Mathematics and Artificial Intelligence, ICMAI 2019
Y2 - 12 April 2019 through 15 April 2019
ER -