Making Deep Neural Networks Robust to Label Noise: Cross-Training with a Novel Loss Function

Zhen Qin, Zhengwen Zhang, Yan Li*, Jun Guo

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Deep neural networks (DNNs) have achieved remarkable results on a variety of supervised learning tasks, owing to large-scale, well-labeled training data. However, as recent research has pointed out, the generalization performance of DNNs is likely to deteriorate sharply when the training data contains label noise. To address this problem, a novel loss function is proposed that guides DNNs to pay more attention to clean samples by adaptively weighting the traditional cross-entropy loss. Under the guidance of this loss function, a cross-training strategy is designed that leverages two synergic DNN models, each of which both updates its own parameters and generates curricula for the other. In addition, this paper proposes an online data filtration mechanism and integrates it into the final cross-training framework, which simultaneously optimizes the DNN models and filters out noisy samples. The proposed approach is evaluated through extensive experiments on several benchmark datasets with synthetic or real-world label noise, and the results demonstrate its robustness to different noise types and noise levels.
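
The abstract describes three interacting mechanisms: a weighted cross-entropy loss that emphasizes likely-clean (low-loss) samples, a cross-training loop in which each of two peer networks selects a curriculum for the other, and an online filtration step that discards suspected noisy samples. The following minimal PyTorch sketch illustrates this style of training; note that the exponential weighting scheme, the temperature `tau`, and the `keep_ratio` selection rule are illustrative assumptions, since the abstract does not specify the paper's actual loss formula or filtration criterion.

```python
import torch
import torch.nn.functional as F

def weighted_ce_loss(logits, targets, tau=2.0):
    # Per-sample cross-entropy; reduction="none" keeps one loss per sample.
    per_sample_ce = F.cross_entropy(logits, targets, reduction="none")
    # Hypothetical weighting: samples with large loss are treated as likely
    # noisy and are down-weighted, so training focuses on clean samples.
    # The paper's actual weighting formula is not given in the abstract.
    weights = torch.exp(-per_sample_ce / tau).detach()
    return (weights * per_sample_ce).mean()

def cross_training_step(model_a, model_b, opt_a, opt_b, x, y, keep_ratio=0.8):
    # Each model ranks the batch by its own per-sample loss and hands the
    # small-loss subset (its "curriculum") to the peer; the assumed
    # keep_ratio acts as an online data-filtration threshold.
    with torch.no_grad():
        loss_a = F.cross_entropy(model_a(x), y, reduction="none")
        loss_b = F.cross_entropy(model_b(x), y, reduction="none")
        k = max(1, int(keep_ratio * x.size(0)))
        idx_from_a = loss_a.topk(k, largest=False).indices  # curriculum for B
        idx_from_b = loss_b.topk(k, largest=False).indices  # curriculum for A

    # Each model updates its own parameters on the subset chosen by the other.
    opt_a.zero_grad()
    weighted_ce_loss(model_a(x[idx_from_b]), y[idx_from_b]).backward()
    opt_a.step()

    opt_b.zero_grad()
    weighted_ce_loss(model_b(x[idx_from_a]), y[idx_from_a]).backward()
    opt_b.step()
```

Exchanging the selected subsets between the two networks, rather than letting each train on its own selection, is what keeps the peers from reinforcing their own mistakes on noisy labels.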

Original language: English
Article number: 8834773
Pages (from-to): 130893-130902
Number of pages: 10
Journal: IEEE Access
Volume: 7
DOIs
Publication status: Published - 2019

Keywords

  • Deep neural networks
  • cross-training
  • data filtration
  • label noise
  • loss function
