Scientific Reports
Research on target detection algorithm for forest fire images based on multi-scale feature extraction
  • Article
  • Open access
  • Published: 09 March 2026

  • Weilin Wu1,2,3,
  • Xinpeng Zhou1,2,3,
  • Jincheng Qin1,2,3,
  • Zhanyue Fu1,2,3 &
  • Kai Xing1,2,3 

Scientific Reports (2026). Cite this article


We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note there may be errors present which affect the content, and all legal disclaimers apply.

Subjects

  • Engineering
  • Mathematics and computing

Abstract

To address the challenges of small-target flames and target scale variation in forest fire images, a target detection method for forest fire images based on multi-scale feature extraction was studied, with YOLOv9c as the baseline model. First, a lightweight feature extraction module named EGI (ECA_Ghost_InceptionV2) was proposed as the backbone feature extraction network, improving the model’s feature extraction capability and operational efficiency. Second, a P2 small-target detection head was introduced; a small-target feature fusion module was added to the Neck layer, and the CARAFE upsampling operator was incorporated, enhancing the model’s ability to extract low-level feature information. Finally, to resolve the misalignment and scale inconsistency of the traditional IoU loss function, Inner_DIoU was introduced, which describes the relative relationship between bounding boxes more accurately and improves detection precision. The improved model was validated through experiments on the DFireDataset. Results show that it achieved a detection accuracy of 79.2%, a 3.8% improvement over the baseline model, while the number of parameters was reduced by 29%. It also maintains a real-time inference speed of over 25 FPS on edge GPUs, enabling deployment in UAV-based forest monitoring systems. In addition, the model is robust to complex natural backgrounds and significantly reduces false alarms compared with existing methods. These findings demonstrate that the proposed model performs well in small-target flame detection and is well suited to the target detection task for forest fire images.
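The Inner_DIoU term described in the abstract can be sketched numerically. The function below is an illustrative reconstruction, not the authors' implementation: it combines an Inner-IoU-style overlap computed on auxiliary boxes scaled by a `ratio` about each box centre with the standard DIoU centre-distance penalty. The scaling ratio, the function name, and the exact way the two terms are combined are assumptions for illustration.

```python
def inner_diou(box1, box2, ratio=0.75):
    """Sketch of an Inner-DIoU-style loss for two axis-aligned boxes
    given as (x1, y1, x2, y2). `ratio` scales auxiliary "inner" boxes
    about each box centre, following the Inner-IoU idea; the paper's
    exact formulation may differ."""
    def center_wh(b):
        cx, cy = (b[0] + b[2]) / 2.0, (b[1] + b[3]) / 2.0
        return cx, cy, b[2] - b[0], b[3] - b[1]

    def inner(b):
        # auxiliary box: same centre, width/height scaled by `ratio`
        cx, cy, w, h = center_wh(b)
        return (cx - w * ratio / 2, cy - h * ratio / 2,
                cx + w * ratio / 2, cy + h * ratio / 2)

    a, b = inner(box1), inner(box2)
    # IoU of the scaled inner boxes
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    inner_iou = inter / union if union > 0 else 0.0

    # DIoU centre-distance penalty on the original boxes:
    # squared centre distance over squared enclosing-box diagonal
    cx1, cy1, _, _ = center_wh(box1)
    cx2, cy2, _, _ = center_wh(box2)
    d2 = (cx1 - cx2) ** 2 + (cy1 - cy2) ** 2
    ex1, ey1 = min(box1[0], box2[0]), min(box1[1], box2[1])
    ex2, ey2 = max(box1[2], box2[2]), max(box1[3], box2[3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2
    return 1.0 - inner_iou + (d2 / c2 if c2 > 0 else 0.0)
```

With `ratio=1.0` the auxiliary boxes coincide with the originals and the expression reduces to the plain DIoU loss; smaller ratios make the overlap term stricter, which is the mechanism Inner-IoU uses to sharpen regression on small targets such as distant flames.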

Data availability

The data presented in this study are available upon request from the corresponding author.


Funding

This research was funded by the Guangxi Key Research and Development Program under Grant No. FN2504240010, by the Guangxi Zhuang Autonomous Region Youth Talent Project under Grant 301780227, by the Guangxi Basic Ability Improvement Project for Young and Middle-aged Teachers under Grant 2025KY0213, and by the Guangxi Minzu University Xiangsi Lake Youth Scholar Innovation Team under Grant 2023GXUNXSHQN06.

Author information

Authors and Affiliations

  1. The Center for Applied Mathematics of Guangxi, School of Physics and Electronic Information, Guangxi Minzu University, Nanning, 530006, China

    Weilin Wu, Xinpeng Zhou, Jincheng Qin, Zhanyue Fu & Kai Xing

  2. Guangxi Key Laboratory of ZHIYU Humanoid Robots, Nanning, 530006, China

    Weilin Wu, Xinpeng Zhou, Jincheng Qin, Zhanyue Fu & Kai Xing

  3. Engineering Research Center of Multi-modal Information Intelligent Sensing, Processing and Application, Guangxi Minzu University, Nanning, 530006, China

    Weilin Wu, Xinpeng Zhou, Jincheng Qin, Zhanyue Fu & Kai Xing

Authors
  1. Weilin Wu
  2. Xinpeng Zhou
  3. Jincheng Qin
  4. Zhanyue Fu
  5. Kai Xing

Contributions

Conceptualization, Z.X.; methodology, Z.X.; software, Z.X.; validation, Z.X., W.W., Q.J., F.Z. and X.K.; formal analysis, Z.X.; investigation, Z.X.; resources, Z.X.; data curation, Z.X., Q.J. and F.Z.; writing—original draft preparation, Z.X.; writing—review and editing, W.W.; visualization, F.Z.; supervision, W.W.; project administration, W.W.; funding acquisition, W.W. All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Jincheng Qin.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


About this article


Cite this article

Wu, W., Zhou, X., Qin, J. et al. Research on target detection algorithm for forest fire images based on multi-scale feature extraction. Sci Rep (2026). https://doi.org/10.1038/s41598-026-41994-2


  • Received: 20 November 2025

  • Accepted: 24 February 2026

  • Published: 09 March 2026

  • DOI: https://doi.org/10.1038/s41598-026-41994-2


Keywords

  • Deep learning
  • Object detection
  • Forest fire image
  • Multi-scale feature extraction
