Rethinking Motivation of Deep Neural Architectures

Weilin Luo, Jinhu Lu*, Xuerong Li, Lei Chen, Kexin Liu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Nowadays, deep neural architectures have achieved great success in many domains, such as image processing and natural language processing. In this paper, we aim to provide new perspectives for the future exploration of novel artificial neural architectures by reviewing how existing architectures were proposed and developed. We first divide the influence of intrinsic motivations on common deep neural architectures into three broad categories: information processing, information transmission, and learning strategy. To illustrate how deep neural architectures are motivated and developed, we then introduce the motivation and architectural details of three representative networks: the convolutional neural network (CNN), the recurrent neural network (RNN), and the generative adversarial network (GAN). The evolution of these architectures is also elaborated. Finally, we conclude the review and discuss several promising future research topics on deep neural architectures.
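
For readers unfamiliar with the three architecture families named in the abstract, the following minimal PyTorch sketch illustrates a skeletal CNN, RNN, and GAN side by side, loosely mapped to the three motivation categories. It is an illustration added to this summary, not material from the paper; all module names, layer sizes, and hyperparameters are assumptions chosen for brevity.

# Illustrative sketch (assumed, not from the paper): minimal CNN, RNN, and GAN.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Convolution + pooling: local feature extraction (information processing)."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(8 * 14 * 14, num_classes)

    def forward(self, x):                      # x: (batch, 1, 28, 28)
        return self.classifier(self.features(x).flatten(1))

class TinyRNN(nn.Module):
    """Recurrent state carried across time steps (information transmission)."""
    def __init__(self, input_size: int = 16, hidden_size: int = 32):
        super().__init__()
        self.rnn = nn.GRU(input_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                      # x: (batch, seq_len, input_size)
        _, h_n = self.rnn(x)
        return self.head(h_n[-1])

class TinyGAN(nn.Module):
    """Generator/discriminator pair trained adversarially (learning strategy)."""
    def __init__(self, latent_dim: int = 8, data_dim: int = 16):
        super().__init__()
        self.generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                       nn.Linear(32, data_dim))
        self.discriminator = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(),
                                           nn.Linear(32, 1))

    def forward(self, z):                      # z: (batch, latent_dim)
        fake = self.generator(z)               # synthesize a sample from noise
        return self.discriminator(fake)        # score it as real/fake

if __name__ == "__main__":
    print(TinyCNN()(torch.randn(2, 1, 28, 28)).shape)   # torch.Size([2, 10])
    print(TinyRNN()(torch.randn(2, 5, 16)).shape)       # torch.Size([2, 1])
    print(TinyGAN()(torch.randn(2, 8)).shape)           # torch.Size([2, 1])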

Original language: English
Article number: 9258442
Pages (from-to): 65-76
Number of pages: 12
Journal: IEEE Circuits and Systems Magazine
Volume: 20
Issue number: 4
DOIs
Publication status: Published - 1 Oct 2020
Externally published: Yes
