A few-shot high-resolution remote sensing image semantic segmentation method
  • Article
  • Open access
  • Published: 01 April 2026


  • Han-Lin Jiang1,
  • Ning Wang1,
  • Bo Geng5,
  • Zu-Kui Li6,
  • Rong-Hai Wu1,
  • Xiao-Wei Li1,
  • Ben-Hui Chen4,
  • En-Ming Zhao3,
  • Guo-Peng Ren2,
  • Mei Zhang1 &
  • Deng-Qi Yang1 

Scientific Reports (2026)

We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note there may be errors present which affect the content, and all legal disclaimers apply.

Subjects

  • Biological techniques
  • Computational biology and bioinformatics
  • Environmental sciences

Abstract

Semantic segmentation of high-resolution Unmanned Aerial Vehicle (UAV) remote sensing images plays a crucial role in environmental monitoring, urban planning, agricultural assessment, and disaster management. Deep-learning-based semantic segmentation methods have demonstrated superior performance, but they rely on large amounts of annotated data, so their accuracy degrades significantly in small-sample scenarios. To improve performance on small-scale remote sensing segmentation datasets, methods combining knowledge distillation (KD) and semi-supervised learning have been proposed. These methods use models pre-trained on large-scale natural image datasets (such as ImageNet) to directly guide the training of student models on target datasets, yielding significant performance gains. However, the feature distribution of natural image datasets differs markedly from that of remote sensing imagery, so student models guided directly by teachers pre-trained on natural images often fail to reach optimal performance, especially when few labeled samples are available in the target domain. Whether introducing a medium-scale remote sensing dataset as an intermediate domain between natural image datasets and the target remote sensing dataset can further improve model performance is a question worth exploring. This study proposes a few-shot remote sensing image semantic segmentation method that combines multi-stage knowledge distillation (MKD) and semi-supervised learning (SSL) to progressively bridge domain gaps and exploit unlabeled data. Experimental results on the Erhai UAV dataset (EH) show that the proposed MKD + SSL method achieves a mean IoU of 77.05% with only 880 labeled samples, outperforming the widely used single-stage KD method by 3.06% mIoU, with per-class IoU gains ranging from 2.17% to 5.21%.
On the Cityscapes benchmark, the framework further surpasses state-of-the-art methods such as UniMatch, improving mIoU by 1.5% and 1.4% under the 1/16 and 1/8 labeled settings, respectively. These results demonstrate that the proposed method effectively enhances segmentation accuracy in few-shot settings and generalizes well across diverse datasets, suggesting wide practical value.
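The abstract reports per-class IoU and mean IoU, and describes knowledge distillation from a teacher to a student model. The paper's exact losses and evaluation code are not reproduced in this page; as a rough illustration only, the sketch below shows (a) how per-class IoU and mIoU are conventionally computed from a pixel-wise confusion matrix, and (b) a generic temperature-scaled soft-label distillation loss of the kind used in single-stage KD. All function names and the temperature value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def confusion_matrix(pred, gt, num_classes):
    """Pixel-wise confusion matrix; rows = ground truth, cols = prediction."""
    mask = (gt >= 0) & (gt < num_classes)  # ignore out-of-range / void labels
    return np.bincount(
        num_classes * gt[mask].astype(int) + pred[mask].astype(int),
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes)

def mean_iou(cm):
    """Per-class IoU = TP / (TP + FP + FN); mIoU is the class-wise mean."""
    tp = np.diag(cm)
    denom = cm.sum(axis=0) + cm.sum(axis=1) - tp
    iou = tp / np.maximum(denom, 1)
    return iou, iou.mean()

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=2.0):
    """Temperature-scaled KL divergence between teacher and student
    class distributions (the standard soft-label KD objective).
    T=2.0 is an illustrative choice, not taken from the paper."""
    p_t = softmax(teacher_logits / T)
    p_s = softmax(student_logits / T)
    kl = (p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12))).sum(axis=-1)
    return float(kl.mean() * T * T)
```

For example, with predictions `[0, 1, 1, 0]` against labels `[0, 1, 0, 0]` over two classes, the IoUs are 2/3 and 1/2, giving an mIoU of about 0.583. In the multi-stage setting the abstract describes, a loss of this form would be applied repeatedly: first distilling a natural-image teacher into an intermediate-domain model, then distilling that model into the student on the target remote sensing dataset.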

Data availability

The model code and additional metadata can be accessed on GitHub (https://github.com/Mrjianghanlin/A-Few-Shot-High-Resolution-Remote-Sensing-Image-Semantic-Segmentation-Method3). The research uses three key datasets: (A) the UAV remote sensing data of the EH dataset, available on Google Drive (https://drive.google.com/drive/folders/14YghGdWH-DzJy3sfTEQy0Tj3zhJtwd7Y?usp=sharing); (B) the high-resolution remote sensing images of the HW dataset, available on AI Studio (https://aistudio.baidu.com/datasetdetail/54302/0); and (C) the urban scene images of the Cityscapes dataset, freely accessible on the official website (https://www.cityscapes-dataset.com/). If you are unable to access any of these datasets, please contact Han-Lin Jiang (email: 15691552855@163.com) for assistance.


Funding

This study was supported by the National Natural Science Foundation of China (32260131, 31960119, 62262001), the Yunnan Young and Middle-aged Academic and Technical Leaders Reserve Talent Project in China (202405AC350023, 202205AC160001), and the Scientific Research Fund project of the Education Department of Yunnan Province of China (2025Y1250, 2024Y850).

Author information

Authors and Affiliations

  1. College of Mathematics and Computer Science, Dali University, Dali, 671003, Yunnan, China

    Han-Lin Jiang, Ning Wang, Rong-Hai Wu, Xiao-Wei Li, Mei Zhang & Deng-Qi Yang

  2. College of Agricultural and Biological Sciences, Dali University, Dali, 671003, Yunnan, China

    Guo-Peng Ren

  3. College of Engineering, Dali University, Dali, 671003, Yunnan, China

    En-Ming Zhao

  4. Department of Mathematics and Information Technology, Lijiang Teachers College, Lijiang, 674100, Yunnan, China

    Ben-Hui Chen

  5. China Tower Co., LTD., Dali Branch, Dali, 671003, Yunnan, China

    Bo Geng

  6. Yunnan Hualiang Data Group Co., LTD, Dali, 671003, Yunnan, China

    Zu-Kui Li


Contributions

H.-L.J. and N.W. designed the study and wrote the manuscript. B.G. and Z.-K.L. performed the experiments. R.-H.W. analyzed the data. X.-W.L. and M.Z. wrote the manuscript. B.-H.C. and E.-M.Z. prepared the figures. G.-P.R. supervised the project. X.-W.L., E.-M.Z., and D.-Q.Y. acquired funding. All authors reviewed and approved the final manuscript.

Corresponding authors

Correspondence to Mei Zhang or Deng-Qi Yang.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


About this article


Cite this article

Jiang, HL., Wang, N., Geng, B. et al. A few-shot high-resolution remote sensing image semantic segmentation method. Sci Rep (2026). https://doi.org/10.1038/s41598-026-46887-y


  • Received: 03 June 2025

  • Accepted: 27 March 2026

  • Published: 01 April 2026

  • DOI: https://doi.org/10.1038/s41598-026-46887-y


Keywords

  • Semantic segmentation
  • Multi-stage knowledge distillation
  • Semi-supervised learning
  • UAV remote sensing image
  • Few-shot learning