RFGLNet for adverse weather domain-generalized semantic segmentation with frequency low-rank enhancement
  • Article
  • Open access
  • Published: 11 February 2026

  • Xin Ye1,
  • Xiaoqi Shi1 &
  • Yuxue Li2 

Scientific Reports (2026)


We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note there may be errors present which affect the content, and all legal disclaimers apply.

Subjects

  • Engineering
  • Mathematics and computing

Abstract

Semantic segmentation in adverse weather conditions is challenging: insufficient image brightness, excessive noise, and blurred object boundaries degrade the performance of traditional visual recognition methods. Domain generalization (DG) for semantic segmentation aims to leverage data from normal-illumination domains so that a model remains robust in unseen adverse-weather domains, a critical requirement for autonomous driving robots. Recent advances in parameter-efficient fine-tuning of frozen vision foundation models offer new avenues for DG. However, conventional domain-generalized semantic segmentation methods often struggle in severe weather, particularly at capturing object details and global structure. To overcome these limitations, we introduce RFGLNet, a domain-generalized semantic segmentation model designed for adverse weather scenarios. RFGLNet improves segmentation accuracy through three components: an SVD-Initialized Low-Rank Module, a Fourier-Enhanced Channel Attention Module, and a Grouped Modeling Spatial Attention Module. By exploiting frequency-domain information through Fourier transforms, RFGLNet strengthens global structural perception, facilitating a holistic understanding of complex scenes. The grouped spatial attention mechanism reduces cross-channel interference, enhancing local detail extraction, while singular value decomposition-based parameter fine-tuning aligns the trainable parameters quickly and precisely with the pretrained feature distribution. Our experiments show that RFGLNet achieves a mean intersection over union of 78.3% on the ACDC adverse-weather test set with only 4.32 M trainable parameters.
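The SVD-initialized low-rank fine-tuning idea mentioned in the abstract can be sketched as follows. This is a minimal illustration of the general recipe (initializing the low-rank factors from the top singular triplets of a frozen pretrained weight, rather than randomly as in vanilla LoRA), not RFGLNet's actual module; the function name, the rank r = 8, and the matrix sizes are hypothetical.

```python
import numpy as np

def svd_lowrank_init(W, r):
    """Return factors (B, A) with B @ A equal to the rank-r truncated SVD of W.

    Splitting sqrt(S) between the two factors keeps their scales balanced,
    so the low-rank update starts aligned with W's dominant feature directions.
    """
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    B = U[:, :r] * np.sqrt(S[:r])            # (out_dim, r): scaled left vectors
    A = np.sqrt(S[:r])[:, None] * Vt[:r]     # (r, in_dim): scaled right vectors
    return B, A

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32))            # stand-in for a frozen pretrained weight
B, A = svd_lowrank_init(W, r=8)

# B @ A is the best rank-8 approximation of W (Eckart-Young), so its
# spectral-norm error equals the 9th singular value of W.
err = np.linalg.norm(W - B @ A, ord=2)
sigma = np.linalg.svd(W, compute_uv=False)
assert np.isclose(err, sigma[8])
```

In a fine-tuning setting, B and A would then be the only trainable parameters, with the frozen backbone weight left untouched; this is the sense in which the trainable-parameter count stays small.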


Data availability

The datasets used and/or analysed during the current study are available in the Cityscapes official repository (https://www.cityscapes-dataset.com/) and the ACDC official repository (https://acdc.vision.ee.ethz.ch/).


Funding

No funding was received for this paper.

Author information

Authors and Affiliations

  1. Xi’an Technological University, Xi’an, China

    Xin Ye & Xiaoqi Shi

  2. Chaoyue Technology Co., Ltd., Shandong, China

    Yuxue Li


Contributions

Introduction: Xin Ye and Xiaoqi Shi; methodology: Xin Ye and Xiaoqi Shi; software: Xiaoqi Shi; validation: Yuxue Li; writing—original draft preparation: Xin Ye and Xiaoqi Shi; writing—review and editing: Yuxue Li. All the authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Xiaoqi Shi.

Ethics declarations

Competing Interests

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


About this article


Cite this article

Ye, X., Shi, X. & Li, Y. RFGLNet for adverse weather domain-generalized semantic segmentation with frequency low-rank enhancement. Sci Rep (2026). https://doi.org/10.1038/s41598-026-39052-y


  • Received: 24 December 2025

  • Accepted: 02 February 2026

  • Published: 11 February 2026

  • DOI: https://doi.org/10.1038/s41598-026-39052-y


Keywords

  • Semantic segmentation
  • Autonomous driving robots
  • Domain generalization
  • Adverse weather environment
  • Attention mechanism
