An experimental evaluation of extreme learning machines on several hardware devices

Liang Li, Guoren Wang*, Gang Wu, Qi Zhang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

As an important learning algorithm, the extreme learning machine (ELM) is known for its excellent learning speed. As ELM's applications in classification and regression expand, the demand for real-time performance is increasing. Although hardware acceleration is an obvious solution, how to select the appropriate acceleration hardware for ELM-based applications is a topic worthy of further discussion. For this purpose, we designed and evaluated optimized ELM algorithms on three kinds of state-of-the-art acceleration hardware, i.e., the multi-core CPU, the Graphics Processing Unit (GPU), and the Field-Programmable Gate Array (FPGA), all of which are well suited to matrix multiplication optimization. The experimental results showed that the speedup ratios of these optimized algorithms on the acceleration hardware reached 10–800. Therefore, we suggest (1) using the GPU to accelerate ELM algorithms for large datasets, and (2) using the FPGA for small datasets because of its lower power consumption, especially in embedded applications. We have also open-sourced our code.
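ELM training is dominated by dense matrix operations: computing the hidden-layer output matrix and solving for the output weights via a pseudo-inverse, which is why CPU, GPU, and FPGA back-ends that accelerate matrix multiplication pay off. The sketch below illustrates the generic single-hidden-layer ELM in NumPy; it is only an illustrative sketch of the standard algorithm under assumed names and hyperparameters, not the authors' released implementation.

```python
import numpy as np

def elm_train(X, T, n_hidden=128, seed=0):
    """Minimal single-hidden-layer ELM: random fixed input weights, analytic output weights."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Randomly generated input weights and biases (never updated in ELM)
    W = rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden)
    # Hidden-layer output matrix H: an (n_samples x n_hidden) matrix product,
    # the main workload that acceleration hardware targets
    H = np.tanh(X @ W + b)
    # Output weights via the Moore-Penrose pseudo-inverse of H
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    # Inference is again a pair of matrix multiplications
    return np.tanh(X @ W + b) @ beta
```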

Original language: English
Pages (from-to): 14385-14397
Number of pages: 13
Journal: Neural Computing and Applications
Volume: 32
Issue number: 18
DOI
Publication status: Published - 1 Sep 2020
