TY - GEN
T1 - CascadeNet
T2 - 24th International Conference on Pattern Recognition, ICPR 2018
AU - Li, Xiang
AU - Li, Wei
AU - Xu, Xiaodong
AU - Du, Qian
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/11/26
Y1 - 2018/11/26
N2 - Different enhanced convolutional neural network (CNN) architectures have been proposed to overcome the very-deep-layer bottleneck by using shortcut connections. In this paper, we present an effective deep CNN architecture modified from the typical Residual Network (ResNet), named Cascade Network (CascadeNet), built by repeating cascade building blocks. Each cascade block contains independent convolution paths to pass information from the previous layer and the middle one. This strategy introduces a concept of 'cross-passing', which differs from ResNet, which stacks simple building blocks with residual connections. Traditional residual building blocks do not fully utilize the middle-layer information, but the designed cascade block captures cross-passing information for more complete features. CascadeNet has several characteristics: it enhances feature propagation and reuses features after each layer instead of after each block. To verify the performance of CascadeNet, the proposed architecture is evaluated in different ways on two data sets (i.e., CIFAR-10 and the HistoPhenotypes dataset), showing better results than its ResNet counterpart.
AB - Different enhanced convolutional neural network (CNN) architectures have been proposed to overcome the very-deep-layer bottleneck by using shortcut connections. In this paper, we present an effective deep CNN architecture modified from the typical Residual Network (ResNet), named Cascade Network (CascadeNet), built by repeating cascade building blocks. Each cascade block contains independent convolution paths to pass information from the previous layer and the middle one. This strategy introduces a concept of 'cross-passing', which differs from ResNet, which stacks simple building blocks with residual connections. Traditional residual building blocks do not fully utilize the middle-layer information, but the designed cascade block captures cross-passing information for more complete features. CascadeNet has several characteristics: it enhances feature propagation and reuses features after each layer instead of after each block. To verify the performance of CascadeNet, the proposed architecture is evaluated in different ways on two data sets (i.e., CIFAR-10 and the HistoPhenotypes dataset), showing better results than its ResNet counterpart.
UR - http://www.scopus.com/inward/record.url?scp=85059736203&partnerID=8YFLogxK
U2 - 10.1109/ICPR.2018.8545289
DO - 10.1109/ICPR.2018.8545289
M3 - Conference contribution
AN - SCOPUS:85059736203
T3 - Proceedings - International Conference on Pattern Recognition
SP - 483
EP - 488
BT - 2018 24th International Conference on Pattern Recognition, ICPR 2018
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 20 August 2018 through 24 August 2018
ER -