Distributed extreme learning machine with kernels based on MapReduce

Xin Bi, Xiangguo Zhao*, Guoren Wang, Pan Zhang, Chao Wang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

42 Citations (Scopus)

Abstract

Extreme Learning Machine (ELM) has shown good generalization performance and extremely fast learning speed in many learning applications. Recently, it has been proved that, from an optimization point of view, ELM outperforms Support Vector Machine (SVM) with fewer constraints. ELM provides unified learning schemes for a wide range of feature mappings. Among these unified algorithms, ELM with kernels applies kernels instead of random feature mappings. However, with the exponentially increasing volume of training data in massive learning applications, centralized ELM with kernels suffers from the heavy memory consumption of large matrix operations. Moreover, due to high communication cost, some of these matrix operations cannot be directly implemented on shared-nothing distributed computing models such as MapReduce. This paper proposes a distributed solution named Distributed Kernelized ELM (DK-ELM), which realizes an implementation of ELM with kernels on MapReduce. Distributed kernel matrix calculation and distributed matrix-vector multiplication are applied to parallelize the computation of DK-ELM. Extensive experiments on massive datasets verify both the scalability and the training performance of DK-ELM. Experimental results show that DK-ELM scales well for massive learning applications.
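The centralized kernelized ELM that the abstract contrasts with DK-ELM amounts to solving a regularized linear system over the kernel matrix, which is exactly the step whose memory cost motivates the distributed design. A minimal single-machine sketch, assuming an RBF kernel and the standard ELM-with-kernels solution β = (I/C + Ω)⁻¹T (the function names and NumPy implementation here are illustrative, not the paper's code; the paper instead partitions Ω across MapReduce nodes):

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row-vector sets A and B."""
    sq_dist = (np.sum(A**2, axis=1)[:, None]
               + np.sum(B**2, axis=1)[None, :]
               - 2.0 * A @ B.T)
    return np.exp(-gamma * sq_dist)

def train_kernel_elm(X, T, C=1e5, gamma=1.0):
    """Solve (I/C + Omega) beta = T, where Omega is the N x N kernel matrix.

    For large N this N x N matrix is the memory bottleneck that a
    distributed scheme like DK-ELM avoids by computing Omega in blocks.
    """
    omega = rbf_kernel(X, X, gamma)          # N x N kernel matrix
    n = X.shape[0]
    beta = np.linalg.solve(np.eye(n) / C + omega, T)
    return beta

def predict_kernel_elm(X_train, beta, X_new, gamma=1.0):
    """f(x) = [K(x, x_1), ..., K(x, x_N)] @ beta."""
    return rbf_kernel(X_new, X_train, gamma) @ beta
```

With a large regularization constant C, the fitted model nearly interpolates the training targets, since Ω(I/C + Ω)⁻¹ approaches the identity; the distributed version computes the same quantities but never materializes Ω on one node.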

Original language: English
Pages (from-to): 456-463
Number of pages: 8
Journal: Neurocomputing
Volume: 149
Issue: Part A
DOI
Publication status: Published - 3 Feb 2015
Externally published: Yes
