Robust boosting via self-sampling

  • Xiaoshuang Liu
  • Senlin Luo
  • Limin Pan*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Boosting is a widely used ensemble meta-algorithm due to its excellent performance in combining weak learners into a strong learner. However, vanilla boosting methods have been shown to be sensitive to noise because they place no restriction on persistently misclassified samples during iterations. In this work, we present a new approach to overcoming this sensitivity by incorporating a self-sampling learning framework, which selects reliable samples and smooths the training process based on a sample-reliability measure designed for the boosting procedure. Experimental results on synthetic data and several real-world datasets show that the self-sampling regime automatically optimizes an appropriate training subset under different noise conditions, and that the robust boosting algorithms we propose outperform state-of-the-art methods.
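The abstract describes selecting reliable samples inside the boosting loop. The paper's exact reliability measure is not reproduced here; the following is only an illustrative sketch in which, each round, an AdaBoost-style learner is trained on the fraction of samples with the lowest exponential loss (a simple reliability proxy). The function names, the `keep_ratio` parameter, and the lowest-loss selection rule are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def self_sampling_boost(X, y, n_rounds=10, keep_ratio=0.9):
    """Sketch of boosting with a self-sampling step.

    Each round keeps only the `keep_ratio` fraction of samples with the
    smallest exponential loss, so persistently misclassified (likely
    noisy) samples are excluded from training. Labels y must be in {-1, +1}.
    """
    n = len(y)
    F = np.zeros(n)                       # additive model scores
    learners, alphas = [], []
    for _ in range(n_rounds):
        loss = np.exp(-y * F)             # per-sample exponential loss
        # Self-sampling: train only on the most reliable (lowest-loss) samples.
        keep = np.argsort(loss)[: int(keep_ratio * n)]
        w = loss[keep] / loss[keep].sum()
        stump = DecisionTreeClassifier(max_depth=1, random_state=0)
        stump.fit(X[keep], y[keep], sample_weight=w)
        err = w @ (stump.predict(X[keep]) != y[keep])
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # standard AdaBoost step size
        F += alpha * stump.predict(X)
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas

def boost_predict(learners, alphas, X):
    """Sign of the weighted vote over all weak learners."""
    scores = sum(a * h.predict(X) for a, h in zip(alphas, learners))
    return np.sign(scores)
```

On a toy one-dimensional problem with a fraction of flipped labels, dropping the highest-loss samples each round lets the ensemble recover the clean decision boundary rather than chasing the noise, which is the intuition the abstract appeals to.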

Original language: English
Article number: 105424
Journal: Knowledge-Based Systems
Volume: 193
DOIs
Publication status: Published - 6 Apr 2020

Keywords

  • Boosting
  • Loss function
  • Robustness
  • Self-sampling
