TY - JOUR
T1 - Random Polynomial Neural Networks
T2 - Analysis and Design
AU - Huang, Wei
AU - Xiao, Yueyue
AU - Oh, Sung-Kwun
AU - Pedrycz, Witold
AU - Zhu, Liehuang
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2024
Y1 - 2024
AB - In this article, we propose random polynomial neural networks (RPNNs), realized on the architecture of polynomial neural networks (PNNs) with random polynomial neurons (RPNs). RPNs generalize polynomial neurons (PNs) by building on the random forest (RF) architecture. In the design of RPNs, the target variables are no longer used directly, as in conventional decision trees; instead, a polynomial of these target variables is exploited to determine the average prediction. Unlike the conventional performance index used to select PNs, the correlation coefficient is adopted here to select the RPNs of each layer. Compared with the conventional PNs used in PNNs, the proposed RPNs exhibit the following advantages: first, RPNs are insensitive to outliers; second, RPNs yield the importance of each input variable after training; third, RPNs alleviate overfitting through the use of the RF structure. The overall nonlinearity of a complex system is captured by means of PNNs. Moreover, particle swarm optimization (PSO) is exploited to optimize the parameters when constructing RPNNs. RPNNs take advantage of both RF and PNNs: they achieve high accuracy through the ensemble learning used in RF and describe high-order nonlinear relations between input and output variables, as in PNNs. Experimental results on a series of well-known modeling benchmarks illustrate that the proposed RPNNs outperform state-of-the-art models reported in the literature.
KW - Particle swarm optimization (PSO)
KW - polynomial neural networks (PNNs)
KW - random polynomial neural networks (RPNNs)
KW - random polynomial neurons (RPNs)
UR - http://www.scopus.com/inward/record.url?scp=85164386741&partnerID=8YFLogxK
DO - 10.1109/TNNLS.2023.3288577
M3 - Article
AN - SCOPUS:85164386741
SN - 2162-237X
VL - 35
SP - 15589
EP - 15599
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 11
ER -