TY - JOUR
T1 - Inverse partitioned matrix-based semi-random incremental ELM for regression
AU - Zeng, Guoqiang
AU - Yao, Fenxi
AU - Zhang, Baihai
N1 - Publisher Copyright:
© 2019, Springer-Verlag London Ltd., part of Springer Nature.
PY - 2020/9/1
Y1 - 2020/9/1
N2 - The incremental extreme learning machine has been proven to possess the universal approximation capability. However, two major issues lower its efficiency: first, some “random” hidden nodes are inefficient, which decreases the convergence rate and increases the structural complexity; second, the final output weight vector is not the minimum norm least-squares solution, which decreases the generalization capability. To settle these issues, this paper proposes a simple and efficient algorithm in which the parameters of the even-numbered hidden nodes are calculated by fitting the residual error vector from the previous phase, and then all existing output weights are recursively updated based on an inverse partitioned matrix. The algorithm reduces the number of inefficient hidden nodes and obtains a preferable output weight vector that is always the minimum norm least-squares solution. Theoretical analyses and experimental results show that the proposed algorithm outperforms other incremental extreme learning machine algorithms in convergence rate, generalization capability and structural complexity.
AB - The incremental extreme learning machine has been proven to possess the universal approximation capability. However, two major issues lower its efficiency: first, some “random” hidden nodes are inefficient, which decreases the convergence rate and increases the structural complexity; second, the final output weight vector is not the minimum norm least-squares solution, which decreases the generalization capability. To settle these issues, this paper proposes a simple and efficient algorithm in which the parameters of the even-numbered hidden nodes are calculated by fitting the residual error vector from the previous phase, and then all existing output weights are recursively updated based on an inverse partitioned matrix. The algorithm reduces the number of inefficient hidden nodes and obtains a preferable output weight vector that is always the minimum norm least-squares solution. Theoretical analyses and experimental results show that the proposed algorithm outperforms other incremental extreme learning machine algorithms in convergence rate, generalization capability and structural complexity.
KW - Convergence rate
KW - Extreme learning machine
KW - Inefficient nodes
KW - Inverse partitioned matrix
KW - Minimum norm least-squares solution
UR - http://www.scopus.com/inward/record.url?scp=85067314515&partnerID=8YFLogxK
U2 - 10.1007/s00521-019-04289-4
DO - 10.1007/s00521-019-04289-4
M3 - Article
AN - SCOPUS:85067314515
SN - 0941-0643
VL - 32
SP - 14263
EP - 14274
JO - Neural Computing and Applications
JF - Neural Computing and Applications
IS - 18
ER -