Abstract
Graph neural networks have shown an impressive ability to capture relations among support (labeled) and query (unlabeled) instances in a few-shot task. A feasible approach is to extract features with a pre-trained backbone network and later adjust them in a few-shot scenario with an episodically meta-trained graph network. However, these adjusted features cannot represent the few-shot data characteristics well, owing to the feature-distribution mismatch caused by the different optimization objectives of the backbone and the graph network (multi-class pre-training vs. episodic meta-training). Additionally, learning from the limited support instances fails to depict the true data distribution and thus causes incorrect class allocation. In this paper, we propose to transform the features extracted by a pre-trained self-supervised feature extractor into a Gaussian-like distribution to reduce the feature-distribution mismatch, which significantly benefits the subsequent meta-training of the graph network. To tackle the incorrect class allocation, we propose to leverage both support and query instances to estimate class centers by computing an optimal class allocation matrix. Extensive experiments on few-shot benchmarks demonstrate that our graph-based few-shot learning pipeline outperforms the baseline by 12% and surpasses state-of-the-art results by a large margin under both fully supervised and semi-supervised settings.
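The abstract names two technical ingredients without detailing them: a transform that makes pre-trained features "Gaussian-like", and an "optimal class allocation matrix" computed over support and query instances. A minimal sketch of plausible instantiations follows: a Tukey-style power transform for Gaussianization, and entropy-regularized optimal transport (Sinkhorn iterations) for the allocation matrix. Both choices, and all function names and parameters (`beta`, `reg`, `n_iters`), are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def gaussianize(features, beta=0.5, eps=1e-6):
    """Power transform of non-negative features.

    An assumed instantiation of the abstract's "Gaussian-like
    distribution" transform; `beta` < 1 reduces skew.
    """
    return np.power(features + eps, beta)

def sinkhorn_allocation(cost, reg=0.1, n_iters=50):
    """Entropy-regularized optimal transport between instances and classes.

    One plausible way to compute an "optimal class allocation matrix":
    rows are (support + query) instances, columns are classes, and the
    returned matrix P approximately satisfies uniform marginals.
    """
    n, k = cost.shape
    r = np.full(n, 1.0 / n)          # instance marginal (uniform, assumed)
    c = np.full(k, 1.0 / k)          # class marginal (uniform, assumed)
    K = np.exp(-cost / reg)          # Gibbs kernel from the cost matrix
    u, v = np.ones(n), np.ones(k)
    for _ in range(n_iters):         # alternate marginal projections
        u = r / (K @ v)
        v = c / (K.T @ u)
    return (u[:, None] * K) * v[None, :]
```

In a few-shot episode, `cost` would typically be squared distances between Gaussianized instance features and current class-center estimates; the allocation matrix then re-weights instances when updating the centers.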
Original language | English |
---|---|
Pages (from-to) | 247-256 |
Number of pages | 10 |
Journal | Neurocomputing |
Volume | 470 |
DOI | |
Publication status | Published - 22 Jan 2022 |