MHAFNet: multi-stage hybrid attention and adaptive feature fusion network for image restoration
  • Article
  • Open access
  • Published: 09 April 2026

  • Jin Huang1,
  • Juntao Shen1,
  • Min Wang2,
  • Yanwu Jing3,4 &
  • Rui Chen2 

Scientific Reports, Article number: (2026). Cite this article

  • 185 Accesses


We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note there may be errors present which affect the content, and all legal disclaimers apply.

Subjects

  • Engineering
  • Mathematics and computing

Abstract

Image restoration is a vital research area in computer vision, focused on reconstructing high-quality, clear images from degraded observations. Common types of degradation include noise and blur, which may stem from imaging-device limitations, environmental interference, and other factors. This paper centers on the design and optimization of multi-stage image restoration networks, exploring feature extraction, feature fusion, attention mechanisms, and their practical applications in depth. A multi-stage image restoration network based on a hybrid attention mechanism is proposed. First, each stage progressively extracts and restores image features. Then, an adaptive feature fusion block enables effective cross-stage information transfer. Finally, by computing a loss at each stage and assigning each a different weight, the network converges stably during training. The hybrid attention mechanism enhances the model's focus on critical features and improves its understanding of the overall image structure. The network performs strongly on both image deblurring and denoising. On the GoPro dataset, the restored results achieve a PSNR of 33.26 dB and an SSIM of 0.963; on the SIDD dataset, they reach a PSNR of 40.23 dB and an SSIM of 0.963. Ablation experiments further demonstrate the effectiveness of the multi-stage design, the hybrid attention mechanism, and the adaptive feature fusion block.
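The per-stage weighted loss and the PSNR metric described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the use of MSE as the per-stage loss, the specific stage weights, and the toy three-stage outputs are all assumptions for demonstration.

```python
import numpy as np

def psnr(clean, restored, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_val]."""
    mse = np.mean((clean - restored) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def multi_stage_loss(stage_outputs, target, weights):
    """Weighted sum of per-stage reconstruction losses (MSE assumed here)."""
    return sum(w * np.mean((out - target) ** 2)
               for w, out in zip(weights, stage_outputs))

# Toy example: three stage outputs that progressively approach the target,
# mimicking the progressive restoration across stages.
rng = np.random.default_rng(0)
target = rng.random((16, 16))
stages = [target + rng.normal(0.0, s, target.shape) for s in (0.3, 0.1, 0.03)]

# Later stages are typically weighted more heavily (weights are illustrative).
loss = multi_stage_loss(stages, target, weights=(0.25, 0.25, 0.5))
```

Because each stage's output is closer to the target than the last, the final stage scores a higher PSNR than the first, and weighting the later stages more strongly emphasizes the final reconstruction quality during training.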


Data availability

The datasets generated and/or analysed during the current study are available in the public repositories listed below. For image deblurring: the GoPro blur dataset (3,214 images, 1280 × 720 px) was downloaded from https://github.com/SeungjunNah/DeepDeblur_release. The originally provided train/test split (2,103/1,111 images) was adopted. High-resolution images were cropped into 512 × 512 px patches to accelerate training and inference. For image denoising: the Smartphone Image Denoising Dataset (SIDD) was obtained from https://www.eecs.yorku.ca/~kamel/sidd/. It contains 31,888 noisy/clean image pairs; we used the standard split (30,608 training and 1,280 validation images) after per-image standardisation. All processed splits that support the findings of this study are included in the above public repositories; no additional restrictions apply.
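The 512 × 512 patch extraction mentioned above can be sketched as follows. The non-overlapping tiling and the stride are assumptions, since the text does not specify how patches were laid out over each frame.

```python
import numpy as np

def crop_patches(image, patch=512, stride=512):
    """Tile an H x W x C image into patch x patch crops (non-overlapping by default)."""
    h, w = image.shape[:2]
    patches = []
    for top in range(0, h - patch + 1, stride):
        for left in range(0, w - patch + 1, stride):
            patches.append(image[top:top + patch, left:left + patch])
    return patches

# With this tiling, a 1280 x 720 GoPro frame yields two full 512 x 512 patches;
# a smaller stride would produce overlapping patches and denser coverage.
frame = np.zeros((720, 1280, 3), dtype=np.float32)
crops = crop_patches(frame)
```

A smaller stride (e.g. 256) trades more training patches for redundancy between overlapping crops; either choice is consistent with the cropping described in the text.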


Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 62573366), the Meishan Philosophy and Social Science Key Research Base–Sansu Culture Research Center (Grant No. SS24ZD003), the 2024 Second Batch of Meishan Municipal Guidance Science and Technology Plan Project, Meishan Science and Technology Bureau (Grant No. 2024KJZD169), and the Sichuan Institute of Geological Survey (Grant No. SCIGS-CZDXM-2025009).

Author information

Authors and Affiliations

  1. School of Electrical Engineering, Southwest Jiaotong University, Chengdu, 610000, China

    Jin Huang & Juntao Shen

  2. College of Artificial Intelligence and Electronic Engineering, Sichuan Technology and Business University, Chengdu, 611745, China

    Min Wang & Rui Chen

  3. Surveying and Mapping Geographic Information Center, Sichuan Institute of Geological Survey, Chengdu, 610072, China

    Yanwu Jing

  4. Key Laboratory of Investigation, Monitoring, Protection and Utilization for Cultivated Land Resources, Ministry of Natural Resources, Chengdu, 610045, China

    Yanwu Jing


Contributions

J.H. proposed the project concept, conducted the investigation, developed the methodology, performed algorithm model analysis, handled visualization, and wrote the initial draft of the manuscript. J.T.S. and M.W. contributed to methodology development, algorithm design, experimental data analysis, data visualization, and validation, and reviewed and edited the manuscript. R.C. participated in the investigation, data collection, project administration, visualization, validation, and data curation, and reviewed and edited the manuscript.

Corresponding author

Correspondence to Rui Chen.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


About this article


Cite this article

Huang, J., Shen, J., Wang, M. et al. MHAFNet: multi-stage hybrid attention and adaptive feature fusion network for image restoration. Sci Rep (2026). https://doi.org/10.1038/s41598-026-47500-y

Download citation

  • Received: 19 November 2025

  • Accepted: 31 March 2026

  • Published: 09 April 2026

  • DOI: https://doi.org/10.1038/s41598-026-47500-y

