TY - JOUR
T1 - Modified bidirectional extreme learning machine with Gram–Schmidt orthogonalization method
AU - Zeng, Guoqiang
AU - Zhang, Baihai
AU - Yao, Fenxi
AU - Chai, Senchun
N1 - Publisher Copyright:
© 2018 Elsevier B.V.
PY - 2018/11/17
Y1 - 2018/11/17
N2 - The incremental extreme learning machine (I-ELM) has been proved to be an efficient and simple universal approximator. However, its network architecture may grow very large because of inefficient nodes that contribute little to reducing the residual error. Moreover, the output weights are not the least-squares solution. To reduce the number of such inefficient nodes, the bidirectional ELM (B-ELM), which analytically calculates the input weights of even-numbered nodes, was proposed. Analysis shows that B-ELM can be further improved to achieve a more compact structure. This paper proposes the modified B-ELM (MB-ELM), in which the Gram–Schmidt orthogonalization method is incorporated into B-ELM: the output vectors of the hidden nodes are orthogonalized, and the resulting vectors are taken as the output vectors. MB-ELM greatly reduces the number of inefficient nodes and obtains an output weight vector that is the least-squares solution, so it achieves a faster convergence rate and a more compact network architecture. Specifically, it is proved that, in theory, MB-ELM can reduce the residual error to zero by adding only two nodes to the network. Simulation results verify these conclusions and show that MB-ELM can reach a smaller lower limit of residual error than other I-ELM methods.
AB - The incremental extreme learning machine (I-ELM) has been proved to be an efficient and simple universal approximator. However, its network architecture may grow very large because of inefficient nodes that contribute little to reducing the residual error. Moreover, the output weights are not the least-squares solution. To reduce the number of such inefficient nodes, the bidirectional ELM (B-ELM), which analytically calculates the input weights of even-numbered nodes, was proposed. Analysis shows that B-ELM can be further improved to achieve a more compact structure. This paper proposes the modified B-ELM (MB-ELM), in which the Gram–Schmidt orthogonalization method is incorporated into B-ELM: the output vectors of the hidden nodes are orthogonalized, and the resulting vectors are taken as the output vectors. MB-ELM greatly reduces the number of inefficient nodes and obtains an output weight vector that is the least-squares solution, so it achieves a faster convergence rate and a more compact network architecture. Specifically, it is proved that, in theory, MB-ELM can reduce the residual error to zero by adding only two nodes to the network. Simulation results verify these conclusions and show that MB-ELM can reach a smaller lower limit of residual error than other I-ELM methods.
KW - Incremental extreme learning machine
KW - Inefficient nodes
KW - Least-squares solution
KW - Orthogonalization method
KW - Universal approximator
UR - http://www.scopus.com/inward/record.url?scp=85052957009&partnerID=8YFLogxK
U2 - 10.1016/j.neucom.2018.08.029
DO - 10.1016/j.neucom.2018.08.029
M3 - Article
AN - SCOPUS:85052957009
SN - 0925-2312
VL - 316
SP - 405
EP - 414
JO - Neurocomputing
JF - Neurocomputing
ER -
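
As a minimal illustration of the orthogonalization step the abstract describes, the sketch below applies Gram–Schmidt orthogonalization to the output vectors of incrementally added hidden nodes and computes each output weight as the least-squares solution against the current residual. This is an assumption-laden sketch, not the authors' implementation: random sigmoid nodes stand in for B-ELM's analytically calculated even-numbered nodes, and all function names (gram_schmidt_step, output_weight) are hypothetical.

import numpy as np

def gram_schmidt_step(h_new, H_orth):
    """Orthogonalize a new hidden-node output vector h_new (shape (N,))
    against the columns of H_orth (shape (N, L)), which are assumed
    already mutually orthogonal."""
    v = h_new.copy()
    for j in range(H_orth.shape[1]):
        q = H_orth[:, j]
        v -= (q @ v) / (q @ q) * q   # subtract projection onto q
    return v

def output_weight(v, e):
    """Least-squares output weight of an orthogonalized vector v
    against the current residual e: beta = <v, e> / <v, v>."""
    return (v @ e) / (v @ v)

# Toy usage on a 1-D regression target (hypothetical setup).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
t = np.sin(3.0 * X[:, 0])
e = t.copy()                        # residual starts as the target
H_orth = np.empty((200, 0))         # orthogonalized hidden outputs

for _ in range(10):
    w, b = rng.normal(size=1), rng.normal()      # random node parameters
    h = 1.0 / (1.0 + np.exp(-(X @ w + b)))       # sigmoid hidden output
    v = gram_schmidt_step(h, H_orth)
    if v @ v < 1e-12:                # node nearly dependent on existing ones
        continue
    beta = output_weight(v, e)
    e = e - beta * v                 # residual norm never increases
    H_orth = np.column_stack([H_orth, v])

print("residual norm:", np.linalg.norm(e))

Because the orthogonalized vectors are mutually orthogonal, each beta is optimal independently of the others, which is why the combined output weights form the least-squares solution rather than the suboptimal weights of plain I-ELM.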