TY - JOUR
T1 - SELF-LLP
T2 - Self-supervised learning from label proportions with self-ensemble
AU - Liu, Jiabin
AU - Qi, Zhiquan
AU - Wang, Bo
AU - Tian, Yingjie
AU - Shi, Yong
N1 - Publisher Copyright:
© 2022
PY - 2022/9
Y1 - 2022/9
N2 - In this paper, we tackle the problem of learning from label proportions (LLP), where the training data are arranged into bags and only the proportions of the different categories in each bag are available. Existing efforts mainly focus on training a model with only this limited proportion information in a weakly supervised manner, which results in an apparent performance gap to supervised learning as well as computational inefficiency. In this work, we propose a multi-task pipeline called SELF-LLP to make full use of the information contained in the data and the model themselves. Specifically, to learn stronger representations from the data, we leverage self-supervised learning as a plug-in auxiliary task that yields more transferable visual representations. The main insight is to benefit from self-supervised representation learning with deep models, thereby improving classification performance by a large margin. Meanwhile, to better exploit the implicit benefits of the model itself, we incorporate a self-ensemble strategy that guides the training process with auxiliary supervision constructed by aggregating multiple previous network predictions. A ramp-up mechanism is further employed to stabilize training. In extensive experiments, our method demonstrates compelling advantages in both accuracy and efficiency over several state-of-the-art LLP approaches.
AB - In this paper, we tackle the problem of learning from label proportions (LLP), where the training data are arranged into bags and only the proportions of the different categories in each bag are available. Existing efforts mainly focus on training a model with only this limited proportion information in a weakly supervised manner, which results in an apparent performance gap to supervised learning as well as computational inefficiency. In this work, we propose a multi-task pipeline called SELF-LLP to make full use of the information contained in the data and the model themselves. Specifically, to learn stronger representations from the data, we leverage self-supervised learning as a plug-in auxiliary task that yields more transferable visual representations. The main insight is to benefit from self-supervised representation learning with deep models, thereby improving classification performance by a large margin. Meanwhile, to better exploit the implicit benefits of the model itself, we incorporate a self-ensemble strategy that guides the training process with auxiliary supervision constructed by aggregating multiple previous network predictions. A ramp-up mechanism is further employed to stabilize training. In extensive experiments, our method demonstrates compelling advantages in both accuracy and efficiency over several state-of-the-art LLP approaches.
KW - Learning from label proportion
KW - Multi-task learning
KW - Self-ensemble strategy
KW - Self-supervised learning
UR - http://www.scopus.com/inward/record.url?scp=85130415876&partnerID=8YFLogxK
U2 - 10.1016/j.patcog.2022.108767
DO - 10.1016/j.patcog.2022.108767
M3 - Article
AN - SCOPUS:85130415876
SN - 0031-3203
VL - 129
JO - Pattern Recognition
JF - Pattern Recognition
M1 - 108767
ER -