TY - GEN
T1 - An Incremental Extreme Learning Machine Prediction Method Based on Attenuated Regularization Term
AU - Wang, Can
AU - Li, Yuxiang
AU - Zou, Weidong
AU - Xia, Yuanqing
N1 - Publisher Copyright:
© 2022, Springer Nature Switzerland AG.
PY - 2022
Y1 - 2022
N2 - As a powerful tool for regression prediction, the Incremental Extreme Learning Machine (I-ELM) has good nonlinear approximation ability, but in the original model an uneven distribution of output weights impairs generalization. This paper proposes an Incremental Extreme Learning Machine based on an Attenuated Regularization Term (ARI-ELM). The proposed ARI-ELM adds an attenuated regularization term to the iterative computation of the output weights, shrinking the weights of hidden nodes added early in the iteration while ensuring that nodes added after many iterations are not penalized by a large regularization coefficient. As a result, the output weights of the network settle into a relatively small, evenly distributed state, which reduces model complexity. The paper also proves that the model retains its convergence properties after the attenuated regularization term is added. Simulation results on benchmark data sets demonstrate that the proposed approach achieves better generalization performance than other I-ELM variants. In addition, the algorithm is applied to a concrete weight-prediction scenario in intelligent-manufacturing dynamic scheduling, where it also obtains good results.
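N1 - Editor's sketch: the abstract describes folding a decaying regularization coefficient into I-ELM's per-node output-weight step. The minimal Python sketch below illustrates one plausible reading, assuming a geometric attenuation schedule lam0 * decay**i and a ridge-damped least-squares update; the function names, parameters, and schedule are illustrative assumptions, not the paper's actual formulation.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def ari_elm_fit(X, y, n_nodes=50, lam0=1.0, decay=0.9, rng=None):
        # Hypothetical I-ELM variant: nodes are added one at a time and each
        # new node's output weight is damped by an attenuated (decaying)
        # regularization coefficient, so early nodes get small weights.
        rng = np.random.default_rng(rng)
        n_samples, n_features = X.shape
        weights, biases, betas = [], [], []
        residual = y.astype(float).copy()          # e_0 = y
        for i in range(n_nodes):
            w = rng.uniform(-1, 1, n_features)     # random input weights
            b = rng.uniform(-1, 1)                 # random bias
            h = sigmoid(X @ w + b)                 # new node's output vector
            lam = lam0 * decay**i                  # attenuated regularization coefficient (assumed schedule)
            beta = (h @ residual) / (h @ h + lam)  # ridge-damped least-squares step
            residual -= beta * h                   # update network residual
            weights.append(w); biases.append(b); betas.append(beta)
        return np.array(weights), np.array(biases), np.array(betas)

    def ari_elm_predict(X, weights, biases, betas):
        H = sigmoid(X @ weights.T + biases)        # hidden-layer outputs
        return H @ betas                           # weighted sum of nodes

    # Usage example on toy regression data:
    # X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
    # y = np.sin(X).ravel()
    # W, b, beta = ari_elm_fit(X, y, n_nodes=100, rng=0)
    # y_hat = ari_elm_predict(X, W, b, beta)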
KW - Attenuated regularization term
KW - Dynamic scheduling
KW - Generalization ability
KW - Incremental Extreme Learning Machine
KW - Intelligent manufacturing
KW - Weight distribution
UR - http://www.scopus.com/inward/record.url?scp=85134684445&partnerID=8YFLogxK
DO - 10.1007/978-3-031-09726-3_17
M3 - Conference contribution
AN - SCOPUS:85134684445
SN - 9783031097256
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 189
EP - 200
BT - Advances in Swarm Intelligence - 13th International Conference, ICSI 2022, Proceedings, Part II
A2 - Tan, Ying
A2 - Shi, Yuhui
A2 - Niu, Ben
PB - Springer Science and Business Media Deutschland GmbH
T2 - 13th International Conference on Swarm Intelligence, ICSI 2022
Y2 - 15 July 2022 through 19 July 2022
ER -