TY - JOUR
T1 - Sigma-Pi Cascade Extended Hybrid Neural Network
AU - Iyoda, Eduardo Masato
AU - Hirota, Kaoru
AU - Von Zuben, Fernando J.
N1 - Publisher Copyright:
© 2002 Fuji Technology Press. All rights reserved.
PY - 2002/10
Y1 - 2002/10
AB - A nonparametric neural architecture called the Sigma-Pi Cascade extended Hybrid Neural Network (σπ-CHNN) is proposed to extend the approximation capabilities of neural architectures such as Projection Pursuit Learning (PPL) and Hybrid Neural Networks (HNN). Like PPL and HNN, σπ-CHNN uses distinct activation functions in its neurons; unlike these earlier architectures, it may also employ multiplicative operators in its hidden neurons, enabling it to extract higher-order information from the data. σπ-CHNN admits arbitrary connectivity patterns among neurons. An evolutionary learning algorithm combined with a conjugate gradient algorithm is proposed to automatically design the topology and weights of σπ-CHNN. Its performance is evaluated on five benchmark regression problems. Results show that σπ-CHNN is competitive with PPL and HNN in most problems, in terms of either the computational requirements of the architecture or the approximation accuracy. In some problems, σπ-CHNN reduces the approximation error by an order of magnitude (10⁻¹) compared to PPL and HNN; in others, it achieves the same approximation error while using fewer hidden neurons (usually one fewer than PPL and HNN).
KW - artificial neural networks
KW - function approximation
KW - genetic algorithm
KW - nonparametric learning
UR - http://www.scopus.com/inward/record.url?scp=0842291395&partnerID=8YFLogxK
U2 - 10.20965/jaciii.2002.p0126
DO - 10.20965/jaciii.2002.p0126
M3 - Article
AN - SCOPUS:0842291395
SN - 1343-0130
VL - 6
SP - 126
EP - 134
JO - Journal of Advanced Computational Intelligence and Intelligent Informatics
JF - Journal of Advanced Computational Intelligence and Intelligent Informatics
IS - 3
ER -