Abstract
Incremental Extreme Learning Machine (I-ELM) is a typical constructive feed-forward neural network with random hidden nodes, which can automatically determine the appropriate number of hidden nodes. However, I-ELM and its variants suffer from a notorious problem: their input parameters are randomly assigned and kept fixed throughout the training process, which results in very unstable model performance. To solve this problem, we propose a novel Back-Propagation ELM (BP-ELM) in this study, which dynamically assigns the most appropriate input parameters according to the model's current residual error as hidden nodes are added. In this way, BP-ELM greatly improves the quality of newly added nodes, thereby accelerating convergence and improving model performance. Moreover, at the same error level, the network structure obtained by BP-ELM is more compact than that of I-ELM. We also prove the universal approximation ability of BP-ELM. Experimental results on three benchmark regression problems and a real-life traffic flow prediction problem show that BP-ELM has better stability and generalization ability than other I-ELM-based algorithms.
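To make the idea concrete, below is a minimal Python sketch of the incremental node-addition scheme the abstract describes: each new hidden node's input weights and bias are refined by a few gradient (back-propagation-style) steps against the current residual before its output weight is computed analytically, as in I-ELM. This is an illustration under stated assumptions (sigmoid additive nodes, squared-error objective, output weight treated as fixed within each gradient step), not the paper's exact algorithm; all function and parameter names are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def add_node_bp(X, residual, n_steps=50, lr=0.1, rng=None):
    """Refine one hidden node's input parameters (w, b) by gradient steps
    so its activation vector fits the current residual, then compute the
    output weight beta analytically as in I-ELM (illustrative sketch)."""
    if rng is None:
        rng = np.random.default_rng()
    w = rng.uniform(-1, 1, X.shape[1])  # random initialization, as in I-ELM
    b = rng.uniform(-1, 1)
    for _ in range(n_steps):
        h = sigmoid(X @ w + b)                   # node activations
        beta = (residual @ h) / (h @ h + 1e-12)  # least-squares output weight
        err = residual - beta * h                # residual after this node
        # gradient of 0.5*||err||^2 w.r.t. (w, b), treating beta as fixed
        grad_h = -beta * err * h * (1.0 - h)
        w -= lr * (X.T @ grad_h)
        b -= lr * grad_h.sum()
    h = sigmoid(X @ w + b)
    beta = (residual @ h) / (h @ h + 1e-12)
    return w, b, beta, residual - beta * h

def bp_elm_fit(X, y, max_nodes=20, tol=1e-3):
    """Grow hidden nodes one at a time until the residual norm is small."""
    residual = y.copy()
    nodes = []
    for _ in range(max_nodes):
        w, b, beta, residual = add_node_bp(X, residual)
        nodes.append((w, b, beta))
        if np.linalg.norm(residual) < tol:
            break
    return nodes

# Toy usage: fit a 1-D regression target
X = np.linspace(-1, 1, 200).reshape(-1, 1)
y = np.sin(3 * X[:, 0])
nodes = bp_elm_fit(X, y, max_nodes=30)
pred = sum(beta * sigmoid(X @ w + b) for w, b, beta in nodes)
print("nodes:", len(nodes), "RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```

Because each node is tuned to the residual before being frozen, fewer nodes are typically needed to reach a given error than with purely random nodes, which matches the compactness claim in the abstract.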
| Original language | English |
|---|---|
| Pages (from-to) | 9179-9188 |
| Number of pages | 10 |
| Journal | Soft Computing |
| Volume | 26 |
| Issue number | 18 |
| DOI | |
| Publication status | Published - Sep 2022 |