Towards interpreting deep neural networks via layer behavior understanding

Jiezhang Cao, Jincheng Li, Xiping Hu, Xiangmiao Wu*, Mingkui Tan*

*Corresponding authors for this work

Research output: Contribution to journal › Article › peer-review

8 Citations (Scopus)

Abstract

Deep neural networks (DNNs) have achieved success in many machine learning tasks. However, how to interpret DNNs remains an open problem. In particular, how hidden layers behave is not clearly understood. In this paper, relying on a teacher-student paradigm, we seek to understand the layer behaviors of DNNs by “monitoring” the distribution evolution both across layers (along the depth) and within a single layer (along training epochs). Relying on optimal transport theory, we employ the Wasserstein distance (W-distance) to measure the divergence between a layer's distribution and the target distribution. Theoretically, we prove that (i) the W-distance between the distribution of any layer and the target distribution tends to decrease along the depth; (ii) for a specific layer, the W-distance between its distribution at a given iteration and the target distribution tends to decrease over training epochs; (iii) a deeper layer, however, is not always better than a shallower one. Relying on these properties, we propose an early-exit inference method that improves the performance of multi-label classification. Moreover, our results help to analyze the stability of layer distributions and explain why auxiliary losses are helpful in training DNNs. Extensive experiments justify our theoretical findings.
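As a concrete illustration of the monitoring procedure described in the abstract, below is a minimal Python sketch (not the authors' implementation): it passes data through a toy multi-layer network and reports the 1-D Wasserstein distance between each layer's (randomly projected) output distribution and the target distribution. The toy network, the random 1-D projection, and all names are assumptions made for illustration only.

# Minimal sketch: monitor the W-distance between each layer's output
# distribution and the target distribution (cf. property (i) above).
# NOTE: illustrative assumption, not the paper's code; the toy network
# is untrained and the 1-D projection is a simplification.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Toy regression data; the targets y define the "target distribution".
X = rng.normal(size=(512, 16))
y = np.tanh(X @ rng.normal(size=(16,)))

# A toy MLP with random weights standing in for a trained network.
widths = [16, 32, 32, 32, 1]
weights = [rng.normal(scale=1.0 / np.sqrt(m), size=(m, n))
           for m, n in zip(widths[:-1], widths[1:])]

h = X
for depth, W in enumerate(weights, start=1):
    h = np.tanh(h @ W)
    # Project the layer's features to 1-D before comparing distributions
    # (a simplification; the paper treats the full layer distribution).
    proj = h @ rng.normal(size=(h.shape[1],)) / np.sqrt(h.shape[1])
    print(f"layer {depth}: W-distance to target = "
          f"{wasserstein_distance(proj, y):.4f}")

Under the paper's theory, for a trained teacher-student setup these per-layer distances would tend to decrease with depth; an early-exit rule could then stop at the first layer whose distance (or a related confidence score) falls below a chosen threshold.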

Original language: English
Pages (from-to): 1159-1179
Number of pages: 21
Journal: Machine Learning
Volume: 111
Issue number: 3
DOIs
Publication status: Published - Mar 2022
Externally published: Yes

Keywords

  • Layer behavior
  • Teacher-student paradigm
  • Wasserstein distance
