TY - JOUR
T1 - Translated Multiplicative Neuron
T2 - An Extended Multiplicative Neuron that can Translate Decision Surfaces
AU - Iyoda, Eduardo Masato
AU - Nobuhara, Hajime
AU - Hirota, Kaoru
N1 - Publisher Copyright:
© Fuji Technology Press Ltd. Creative Commons CC BY-ND: This is an Open Access article distributed under the terms of the Creative Commons Attribution-NoDerivatives 4.0 International License (http://creativecommons.org/licenses/by-nd/4.0/).
PY - 2004/9
Y1 - 2004/9
AB - A multiplicative neuron model called the translated multiplicative neuron (πt-neuron) is proposed. Compared to the traditional π-neuron, the πt-neuron offers two advantages: (1) it can generate decision surfaces centered at any point of its input space; and (2) it has a meaningful set of adjustable parameters. Learning rules for πt-neurons are derived using the error backpropagation procedure. It is shown that the XOR and N-bit parity problems can be solved perfectly using a single πt-neuron, with no need for hidden neurons. The πt-neuron is also evaluated on Hwang’s regression benchmark problems, in which neural networks with πt-neurons in the hidden layer outperform conventional multilayer perceptrons (MLPs) in almost all cases: errors are reduced by an average of 58% using about 33% fewer hidden neurons than the MLPs.
KW - N-bit parity problem
KW - XOR problem
KW - multiplicative neurons
KW - neural networks
KW - nonlinear regression
UR - http://www.scopus.com/inward/record.url?scp=33747620260&partnerID=8YFLogxK
U2 - 10.20965/jaciii.2004.p0460
DO - 10.20965/jaciii.2004.p0460
M3 - Article
AN - SCOPUS:33747620260
SN - 1343-0130
VL - 8
SP - 460
EP - 468
JO - Journal of Advanced Computational Intelligence and Intelligent Informatics
JF - Journal of Advanced Computational Intelligence and Intelligent Informatics
IS - 5
ER -