TY - GEN
T1 - CpdsConv
T2 - Tenth Symposium on Novel Optoelectronic Detection Technology and Applications
AU - Ding, Yan
AU - Xiao, Jianbang
AU - Li, Haitan
AU - Zhang, Bozhi
AU - Song, Ping
AU - Cen, Yetao
AU - Fan, Yixiao
N1 - Publisher Copyright:
© 2025 SPIE.
PY - 2025
Y1 - 2025
N2 - Current structured pruning methods typically employ fixed thresholds and rank channels by evaluation metrics before pruning, leading to significant drops in network accuracy post-pruning. This paper proposes a novel continuous pruning method for depthwise separable convolution, named CpdsConv, to address this issue. CpdsConv prunes the network through a two-stage training approach that combines depthwise separable convolutions (TSConv) with learnable threshold sparse convolutions (LTSConv), aiming to enhance accuracy while compressing the model. To validate the effectiveness of the proposed method, we conducted experiments on the VGG-16, MobileNet, GoogLeNet, and ResNet-56 models. The results demonstrated that, while maintaining the original accuracy, the proposed method significantly reduced the number of parameters and the computational burden of these models, achieving a better overall balance between compression and accuracy. Notably, in the ResNet-56 experiment, the parameter count was reduced by 37.9% with a simultaneous accuracy improvement of 0.81%. This indicates that our method not only effectively reduces computational complexity but also significantly enhances model performance.
AB - Current structured pruning methods typically employ fixed thresholds and rank channels by evaluation metrics before pruning, leading to significant drops in network accuracy post-pruning. This paper proposes a novel continuous pruning method for depthwise separable convolution, named CpdsConv, to address this issue. CpdsConv prunes the network through a two-stage training approach that combines depthwise separable convolutions (TSConv) with learnable threshold sparse convolutions (LTSConv), aiming to enhance accuracy while compressing the model. To validate the effectiveness of the proposed method, we conducted experiments on the VGG-16, MobileNet, GoogLeNet, and ResNet-56 models. The results demonstrated that, while maintaining the original accuracy, the proposed method significantly reduced the number of parameters and the computational burden of these models, achieving a better overall balance between compression and accuracy. Notably, in the ResNet-56 experiment, the parameter count was reduced by 37.9% with a simultaneous accuracy improvement of 0.81%. This indicates that our method not only effectively reduces computational complexity but also significantly enhances model performance.
KW - Classification
KW - Convolutional Neural Networks
KW - Evaluation Metrics
KW - Structured Pruning
UR - http://www.scopus.com/inward/record.url?scp=85219318929&partnerID=8YFLogxK
U2 - 10.1117/12.3055167
DO - 10.1117/12.3055167
M3 - Conference contribution
AN - SCOPUS:85219318929
T3 - Proceedings of SPIE - The International Society for Optical Engineering
BT - Tenth Symposium on Novel Optoelectronic Detection Technology and Applications
A2 - Ping, Chen
PB - SPIE
Y2 - 1 November 2024 through 3 November 2024
ER -