Abstract
In recent years, deep learning has been extensively studied as a new way to train multilayer neural networks. Deep learning is a family of machine learning algorithms that attempts to model high-level abstractions in input data through multiple nonlinear transformations. It has achieved notable successes in speech recognition, computer vision, and natural language processing. As data volumes grow rapidly, deep learning is becoming increasingly important for predictive analytics of big data. Training a high-quality, practical deep learning model can require tens of millions of parameters and billions of samples. Because both model sizes and training datasets continue to grow rapidly in the Big Data era, the speed of training a practical model is limited by sequential algorithms and intensive data computation. Deep learning has therefore been accelerated in recent years with parallel implementations on GPUs and clusters. This chapter introduces several mainstream deep learning approaches developed over the past decades, as well as methods for parallelizing and optimizing deep learning.
| Original language | English |
|---|---|
| Title of host publication | Big Data |
| Subtitle of host publication | Principles and Paradigms |
| Publisher | Elsevier Inc. |
| Pages | 95-118 |
| Number of pages | 24 |
| ISBN (Electronic) | 9780128093467 |
| ISBN (Print) | 9780128053942 |
| DOIs | |
| Publication status | Published - 3 Jun 2016 |
| Externally published | Yes |
Keywords
- Big Data analytics
- CUDA
- Deep learning
- GPU
- Machine learning
- Parallel computing