TY - JOUR
T1 - RB-Net: Training Highly Accurate and Efficient Binary Neural Networks with Reshaped Point-Wise Convolution and Balanced Activation
AU - Liu, Chunlei
AU - Ding, Wenrui
AU - Chen, Peng
AU - Zhuang, Bohan
AU - Wang, Yufeng
AU - Zhao, Yang
AU - Zhang, Baochang
AU - Han, Yuqi
N1 - Publisher Copyright:
© 1991-2012 IEEE.
PY - 2022/9/1
Y1 - 2022/9/1
N2 - In this paper, we find that the conventional convolution operation becomes the bottleneck for extremely efficient binary neural networks (BNNs). To address this issue, we open up a new direction by introducing a reshaped point-wise convolution (RPC) to replace the conventional one when building BNNs. Specifically, we perform a point-wise convolution after rearranging spatial information into the depth dimension, which reduces computation by at least 2.25×. Such an efficient RPC allows us to pursue greater representational capacity for BNNs under a given computational complexity budget. Moreover, we propose a balanced activation (BA) to adjust the distribution of the scaled activations after binarization, which yields a significant performance improvement for BNNs. Integrating RPC and BA, the proposed network, dubbed RB-Net, strikes a good trade-off between accuracy and efficiency, achieving superior performance at lower computational cost than state-of-the-art BNN methods. Specifically, our RB-Net achieves 66.8% Top-1 accuracy with a ResNet-18 backbone on ImageNet, exceeding the state-of-the-art Real-to-Binary Net (65.4%) by 1.4% while achieving a more than 3× reduction (52M vs. 165M) in computational complexity.
KW - balanced activation
KW - binary neural network
KW - object classification
KW - reshaped point-wise convolution
UR - http://www.scopus.com/inward/record.url?scp=85128644480&partnerID=8YFLogxK
U2 - 10.1109/TCSVT.2022.3166803
DO - 10.1109/TCSVT.2022.3166803
M3 - Article
AN - SCOPUS:85128644480
SN - 1051-8215
VL - 32
SP - 6414
EP - 6424
JO - IEEE Transactions on Circuits and Systems for Video Technology
JF - IEEE Transactions on Circuits and Systems for Video Technology
IS - 9
ER -