TY - GEN
T1 - Broad Learning System with Proportional-Integral-Differential Gradient Descent
AU - Zou, Weidong
AU - Xia, Yuanqing
AU - Cao, Weipeng
AU - Ming, Zhong
N1 - Publisher Copyright:
© 2020, Springer Nature Switzerland AG.
PY - 2020
Y1 - 2020
AB - The broad learning system (BLS) has attracted much attention in recent years due to its fast training speed and good generalization ability. Most existing BLS-based algorithms use the least squares method to calculate the output weights. As the size of the training data set increases, this approach seriously reduces the training efficiency of the model and makes its solution unstable. To solve this problem, we design a new gradient descent (GD) method based on the proportional-integral-differential (PID) technique to replace the least squares operation in existing BLS algorithms; the resulting model is called PID-GD-BLS. Extensive experimental results on four benchmark data sets show that PID-GD achieves a faster convergence rate than traditional optimization algorithms such as Adam and AdaMod, and that the generalization performance and stability of PID-GD-BLS are much better than those of BLS and its variants. This study provides a new direction for BLS optimization and a better solution for BLS-based data mining.
KW - Broad learning system
KW - Neural networks with random weights
KW - Optimization algorithms
KW - Proportional-integral-differential
KW - Randomized algorithms
UR - http://www.scopus.com/inward/record.url?scp=85092614386&partnerID=8YFLogxK
DO - 10.1007/978-3-030-60245-1_15
M3 - Conference contribution
AN - SCOPUS:85092614386
SN - 9783030602444
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 219
EP - 231
BT - Algorithms and Architectures for Parallel Processing - 20th International Conference, ICA3PP 2020, Proceedings
A2 - Qiu, Meikang
PB - Springer Science and Business Media Deutschland GmbH
T2 - 20th International Conference on Algorithms and Architectures for Parallel Processing, ICA3PP 2020
Y2 - 2 October 2020 through 4 October 2020
ER -