On Robust Grouping Active Learning

Changsheng Li*, Chen Yang, Lingyan Liang, Ye Yuan, Guoren Wang

*Corresponding author of this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Early active learning, in a common paradigm, selects representative samples for human annotation, which aligns with the goal of minimizing the overall reconstruction error in an unsupervised manner. While existing methods mainly focus on data samples drawn from a single, high-dimensional feature space, they struggle to handle the real-world scenario where samples are represented by low-dimensional features drawn from multiple groups (subspaces). In this case, how to leverage the grouping structure to select the most representative samples becomes the key to success. In this paper, we propose an unsupervised active learning framework, called Robust Grouping Active Learning (RGAL), to achieve this goal. The key idea is to take into account the different degrees of information shared across data groups. Specifically, RGAL assumes that the data from each group can be embedded in a low-dimensional space, and that the data distributions of different groups can overlap with each other to a certain degree. RGAL controls such group overlap by imposing sparsity constraints on a matrix of reconstruction coefficients. To encourage a smooth coefficient space, we also enforce a robust loss with Laplacian regularization for noise suppression. We perform extensive experiments on multiple tasks that normally require costly human annotation, including facial age estimation, video action recognition, and medical image classification. Results on benchmark datasets clearly demonstrate the efficacy of our RGAL method compared with state-of-the-art methods.
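The exact formulation is given in the paper; as a hedged sketch of the kind of objective the abstract describes, with the data matrix X = [x_1, ..., x_n], a robust loss ρ(·), trade-off weights λ and γ, and a graph Laplacian L all being assumed notation rather than quotations from the paper, a self-reconstruction criterion with a row-sparsity term and Laplacian smoothing might read

\min_{A \in \mathbb{R}^{n \times n}} \; \sum_{i=1}^{n} \rho\!\left( x_i - X a_i \right) \;+\; \lambda \lVert A \rVert_{2,1} \;+\; \gamma \, \operatorname{tr}\!\left( A L A^{\top} \right),

where a_i is the i-th column of the coefficient matrix A. Under this sketch, the ℓ_{2,1} norm drives entire rows of A to zero so that samples whose rows retain large norms serve as representatives shared across groups and are the ones handed to annotators, while the term tr(A L A^⊤) encourages nearby samples to receive similar coefficients, suppressing noise in the coefficient space.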

Original language: English
Pages (from-to): 103-112
Number of pages: 10
Journal: IEEE Transactions on Emerging Topics in Computational Intelligence
Volume: 6
Issue number: 1
DOI
Publication status: Published - 1 Feb 2022
