Abstract
Liao et al. (Neurocomputing 128:81–87, 2014) proposed a meta-learning approach to the extreme learning machine (Meta-ELM), which obtains good generalization performance by training multiple ELMs. However, an open problem with Meta-ELM is that it tends to overfit when minimizing the training error. In this paper, we propose an improved meta-learning model of ELM (improved Meta-ELM) to address this problem. The improved Meta-ELM architecture consists of several base ELMs, each an error-feedback incremental extreme learning machine (EFI-ELM), and a top ELM. Training proceeds in two stages. First, each EFI-ELM base learner is trained on a subset of the training data. Then, the top ELM is trained with the base ELMs acting as its hidden nodes. Simulation results on artificial and benchmark datasets show that the proposed improved Meta-ELM model is more feasible and effective than Meta-ELM.
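For concreteness, the two-stage architecture described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the authors' implementation: the base learners are shown as plain ELMs (the error-feedback incremental refinement of EFI-ELM is omitted), and the class names, subset size, `tanh` activation, and pseudoinverse solver are choices made for the example.

```python
import numpy as np


class ELM:
    """Single-hidden-layer ELM: random hidden weights, least-squares output weights."""

    def __init__(self, n_hidden, rng):
        self.n_hidden = n_hidden
        self.rng = rng

    def fit(self, X, T):
        n_features = X.shape[1]
        # Hidden weights and biases stay random and untrained, as in a standard ELM.
        self.W = self.rng.normal(size=(n_features, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)   # hidden-layer activations
        self.beta = np.linalg.pinv(H) @ T  # closed-form output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta


class MetaELM:
    """Two-stage Meta-ELM sketch: base ELM outputs become the hidden nodes of a top ELM."""

    def __init__(self, n_base, n_hidden, seed=0):
        self.rng = np.random.default_rng(seed)
        self.bases = [ELM(n_hidden, self.rng) for _ in range(n_base)]

    def fit(self, X, T):
        n = X.shape[0]
        # Stage 1: train each base ELM on a random subset of the training data.
        # (The paper's base learners are EFI-ELMs; plain ELMs are used here for brevity.)
        for base in self.bases:
            idx = self.rng.choice(n, size=max(1, n // 2), replace=False)
            base.fit(X[idx], T[idx])
        # Stage 2: treat the base ELMs' outputs as hidden-node activations
        # and solve the top ELM's output weights by least squares.
        H_top = np.column_stack([base.predict(X) for base in self.bases])
        self.beta_top = np.linalg.pinv(H_top) @ T
        return self

    def predict(self, X):
        H_top = np.column_stack([base.predict(X) for base in self.bases])
        return H_top @ self.beta_top


# Toy usage (hypothetical data): regression on a noisy sine curve.
X = np.linspace(0.0, 2.0 * np.pi, 200).reshape(-1, 1)
T = np.sin(X).ravel() + 0.05 * np.random.default_rng(1).normal(size=200)
model = MetaELM(n_base=5, n_hidden=20).fit(X, T)
print(model.predict(X[:5]))
```

Note that the top layer is solved the same way a standard ELM solves its output weights, only with trained base learners in place of random hidden neurons; this is the sense in which the base ELMs act as hidden nodes.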
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 3363-3370 |
| Number of pages | 8 |
| Journal | Neural Computing and Applications |
| Volume | 30 |
| Issue number | 11 |
| DOIs | |
| Publication status | Published - 1 Dec 2018 |
Keywords
- EFI-ELM
- Heterogeneous
- Meta-ELM
- Overfitting