TY - JOUR
T1 - Conditional Random Mapping for Effective ELM Feature Representation
AU - Li, Cheng
AU - Deng, Chenwei
AU - Zhou, Shichao
AU - Zhao, Baojun
AU - Huang, Guang-Bin
N1 - Publisher Copyright:
© 2018, Springer Science+Business Media, LLC, part of Springer Nature.
PY - 2018/10/1
Y1 - 2018/10/1
N2 - Extreme learning machine (ELM) has been extensively studied due to its fast training and good generalization. Unfortunately, existing ELM-based feature representation methods are uncompetitive with state-of-the-art deep neural networks (DNNs) on some complex visual recognition tasks. This weakness is mainly caused by two critical defects: (1) random feature mapping (RFM) drawn from an ad hoc probability distribution cannot project various input data into discriminative feature spaces; (2) in ELM-based hierarchical architectures, features from the previous layer are scattered via RFM in the current layer, which makes the abstraction of higher-level features ineffective. To address these issues, we take advantage of label information to optimize the random mapping in the ELM, utilizing an efficient label alignment metric to learn a conditional random feature mapping (CRFM) in a supervised manner. Moreover, we propose a new CRFM-based single-layer ELM (CELM) and then extend CELM to a supervised multi-layer learning architecture (ML-CELM). Extensive experiments on various widely used datasets demonstrate that our approach is more effective than the original ELM-based and other existing DNN feature representation methods, while retaining rapid training/testing speed. The proposed CELM and ML-CELM achieve discriminative and robust feature representations, and show superiority in various simulations in terms of generalization and speed.
AB - Extreme learning machine (ELM) has been extensively studied due to its fast training and good generalization. Unfortunately, existing ELM-based feature representation methods are uncompetitive with state-of-the-art deep neural networks (DNNs) on some complex visual recognition tasks. This weakness is mainly caused by two critical defects: (1) random feature mapping (RFM) drawn from an ad hoc probability distribution cannot project various input data into discriminative feature spaces; (2) in ELM-based hierarchical architectures, features from the previous layer are scattered via RFM in the current layer, which makes the abstraction of higher-level features ineffective. To address these issues, we take advantage of label information to optimize the random mapping in the ELM, utilizing an efficient label alignment metric to learn a conditional random feature mapping (CRFM) in a supervised manner. Moreover, we propose a new CRFM-based single-layer ELM (CELM) and then extend CELM to a supervised multi-layer learning architecture (ML-CELM). Extensive experiments on various widely used datasets demonstrate that our approach is more effective than the original ELM-based and other existing DNN feature representation methods, while retaining rapid training/testing speed. The proposed CELM and ML-CELM achieve discriminative and robust feature representations, and show superiority in various simulations in terms of generalization and speed.
KW - CELM
KW - Conditional random feature mapping
KW - Extreme learning machine
KW - ML-CELM
UR - https://www.scopus.com/pages/publications/85046824283
U2 - 10.1007/s12559-018-9557-x
DO - 10.1007/s12559-018-9557-x
M3 - Article
AN - SCOPUS:85046824283
SN - 1866-9956
VL - 10
SP - 827
EP - 847
JO - Cognitive Computation
JF - Cognitive Computation
IS - 5
ER -