Consecutive Pre-Training: A Knowledge Transfer Learning Strategy with Relevant Unlabeled Data for Remote Sensing Domain

Tong Zhang, Peng Gao, Hao Dong, Yin Zhuang*, Guanqun Wang, Wei Zhang, He Chen

*Corresponding author of this work

Research output: Contribution to journal › Article › peer-review

26 Citations (Scopus)

Abstract

Currently, under supervised learning, a model pre-trained on a large-scale natural scene dataset and then fine-tuned on a small amount of task-specific labeled data is the paradigm that has dominated knowledge transfer learning. Unfortunately, owing to the variety of imaging modalities and the difficulty of data annotation, there is no sufficiently large and uniform remote sensing dataset to support large-scale pre-training in the remote sensing domain (RSD). Moreover, pre-training models on large-scale natural scene datasets by supervised learning and then fine-tuning them directly on diverse downstream tasks is a crude approach that is easily affected by inevitable labeling noise, severe domain gaps and task-aware discrepancies. Thus, in this paper, building on self-supervised pre-training and the powerful vision transformer (ViT) architecture, a concise and effective knowledge transfer learning strategy called ConSecutive Pre-Training (CSPT) is proposed, based on the idea of not stopping pre-training from natural language processing (NLP); it gradually bridges the domain gap and transfers knowledge learned from large-scale data to any specific domain (e.g., from the natural scene domain to the RSD). In addition, the proposed CSPT can also unlock the huge potential of unlabeled data for task-aware model training. Finally, extensive experiments were carried out on twelve remote sensing datasets covering three types of downstream tasks (i.e., scene classification, object detection and land cover classification) and two types of imaging data (i.e., optical and synthetic aperture radar (SAR)). The results show that, by using the proposed CSPT for task-aware model training, almost all downstream tasks in the RSD outperform previous pre-training-based knowledge transfer learning strategies without any expensive manual labeling, and even surpass state-of-the-art (SOTA) performance without any careful network architecture design.
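
To make the strategy concrete, the sketch below illustrates the consecutive pre-training idea under stated assumptions: the same self-supervised objective is first applied to large-scale natural scene images and then continued on task-relevant unlabeled remote sensing images, before a final supervised fine-tuning stage. The MAE-style masked-image-modeling loss, the toy ViT encoder, and all dataset stand-ins and sizes here are hypothetical illustrations; the abstract specifies self-supervised pre-training with a ViT but not the exact objective or recipe.

```python
# Minimal, illustrative sketch of a CSPT-style pipeline, assuming an
# MAE-like masked-image-modeling objective. All model sizes, names and
# dataset stand-ins are hypothetical, not the paper's exact configuration.
import torch
import torch.nn as nn

class TinyViT(nn.Module):
    """Toy ViT-style encoder standing in for the paper's ViT backbone."""
    def __init__(self, dim=64, patch=8, img=32, depth=2):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):                          # x: (B, 3, 32, 32)
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        return self.encoder(tokens)

def masked_recon_loss(encoder, decoder, images, patch=8, mask_ratio=0.75):
    """Reconstruct pixels of randomly masked patches (simplified: the mask
    weights the loss only, whereas MAE drops masked tokens at the input)."""
    feats = encoder(images)                        # (B, N, dim)
    B, N, _ = feats.shape
    pred = decoder(feats)                          # (B, N, patch*patch*3)
    target = images.unfold(2, patch, patch).unfold(3, patch, patch)
    target = target.permute(0, 2, 3, 1, 4, 5).reshape(B, N, -1)
    mask = torch.rand(B, N, device=images.device) < mask_ratio
    return ((pred - target) ** 2)[mask].mean()

def pretrain(encoder, decoder, loader, epochs=1, lr=1e-4):
    params = list(encoder.parameters()) + list(decoder.parameters())
    opt = torch.optim.AdamW(params, lr=lr)
    for _ in range(epochs):
        for images in loader:
            loss = masked_recon_loss(encoder, decoder, images)
            opt.zero_grad(); loss.backward(); opt.step()

encoder = TinyViT()
decoder = nn.Linear(64, 8 * 8 * 3)                 # per-patch pixel prediction

# Random tensors stand in for real data loaders in this sketch.
natural_scenes = [torch.randn(4, 3, 32, 32)]       # stage 1: large-scale natural scenes
unlabeled_rsd  = [torch.randn(4, 3, 32, 32)]       # stage 2: unlabeled remote sensing images

pretrain(encoder, decoder, natural_scenes)         # stage 1: generic visual knowledge
pretrain(encoder, decoder, unlabeled_rsd)          # stage 2: keep pre-training on RSD data
# stage 3: attach a task head to `encoder` and fine-tune on the few labeled samples
```

The key design point the sketch captures is that stage 2 reuses the stage-1 weights and the same self-supervised objective, so the domain gap is narrowed using only unlabeled remote sensing data before any labels are consumed.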

Original language: English
Article number: 5675
Journal: Remote Sensing
Volume: 14
Issue number: 22
DOI: 10.3390/rs14225675
Publication status: Published - Nov 2022

Cite this

Zhang, T., Gao, P., Dong, H., Zhuang, Y., Wang, G., Zhang, W., & Chen, H. (2022). Consecutive Pre-Training: A Knowledge Transfer Learning Strategy with Relevant Unlabeled Data for Remote Sensing Domain. Remote Sensing, 14(22), Article 5675. https://doi.org/10.3390/rs14225675