A foundation model for multi-task cross-distribution restoration of fluorescence microscopy images
  • Article
  • Open access
  • Published: 10 March 2026

  • Qiqi Lu 1,2,3 (ORCID: 0000-0001-6066-0690),
  • Xiuli Liu 4 (ORCID: 0000-0001-6663-1647),
  • Qianjin Feng 1,2,3 (ORCID: 0000-0002-3830-5120),
  • Shaoqun Zeng 4 (ORCID: 0000-0002-1802-337X) &
  • Shenghua Cheng 1,2,3 (ORCID: 0000-0003-3527-3845)

Nature Communications (2026). Cite this article


We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note there may be errors present which affect the content, and all legal disclaimers apply.

Subjects

  • Fluorescence imaging
  • Image processing
  • Machine learning

Abstract

Deep learning has demonstrated remarkable abilities in restoring fluorescence microscopy images degraded by noise, blur, or undersampling. However, most existing models are task-specific and trained on limited, homogeneously distributed data, which restricts their generalizability and practicality. Here, we present FluoResFM, a unified foundation model for multi-task, cross-distribution restoration of fluorescence microscopy images. FluoResFM leverages textual prior information to adapt to specific tasks and data distributions. Trained on datasets spanning three tasks (image denoising, deconvolution, and super-resolution) and over 20 biological structures, FluoResFM demonstrates superior restoration performance and enhanced generalization across datasets with varied biological structures and imaging conditions. Fine-tuned with only a single sample, FluoResFM further improves its performance on unseen data, achieving results comparable to those of conventional models trained on hundreds of samples, and it is easily adapted to additional tasks, including 3D image restoration, surface projection, isotropic reconstruction, and super-resolution with various scale factors. Moreover, the high-quality images restored by FluoResFM can enhance the performance of existing cell/organelle segmentation models.
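Restoration quality of the kind reported in the abstract is conventionally quantified with peak signal-to-noise ratio (PSNR) and structural similarity (SSIM; refs. 54, 55 in the reference list). As a minimal illustrative sketch (not the authors' evaluation code), PSNR against a ground-truth image can be computed with NumPy alone:

```python
import numpy as np

def psnr(reference, restored, data_range=None):
    """Peak signal-to-noise ratio of `restored` against `reference`, in dB."""
    reference = np.asarray(reference, dtype=np.float64)
    restored = np.asarray(restored, dtype=np.float64)
    if data_range is None:
        # Fall back to the dynamic range of the reference image.
        data_range = float(reference.max() - reference.min())
    mse = np.mean((reference - restored) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)

# Toy example: a synthetic clean image versus a noisy copy of it.
rng = np.random.default_rng(0)
clean = rng.random((64, 64))
noisy = np.clip(clean + 0.05 * rng.standard_normal((64, 64)), 0.0, 1.0)
print(f"PSNR of noisy copy: {psnr(clean, noisy, data_range=1.0):.1f} dB")
```

Higher PSNR indicates the restored image is closer to the reference; published comparisons in this area usually report SSIM alongside it (scikit-image's `structural_similarity` is a standard implementation).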


Data availability

All the datasets used in this study are derived from existing literature and are publicly accessible through the corresponding links provided in Supplementary Tables 1–4. Example data for training and testing are available in the Zenodo repository (https://doi.org/10.5281/zenodo.18382702). Other raw data (e.g., raw image data underlying the figures and intermediate data for deep learning) are available from the corresponding author upon request owing to their large file size. Source data are provided with this paper.

Code availability

The source code is available via the GitHub repository (https://github.com/qiqi-lu/fluoresfm) and the Zenodo repository64 (https://doi.org/10.5281/zenodo.18383925). The napari plugin code is publicly accessible via GitHub at https://github.com/qiqi-lu/napari-fluoresfm. A guidance video on using the napari plugin is available at https://www.bilibili.com/video/BV16JeFzuEof. The pre-trained text and image encoders from BiomedCLIP and the pre-trained FluoResFM model are publicly accessible in the Zenodo repository (https://doi.org/10.5281/zenodo.18382702).

References

  1. Schermelleh, L. et al. Super-resolution microscopy demystified. Nat. Cell Biol. 21, 72–84 (2019).

  2. Scherf, N. & Huisken, J. The smart and gentle microscope. Nat. Biotechnol. 33, 815–818 (2015).

  3. Goodwin, P. C. Quantitative deconvolution microscopy. In Methods in Cell Biology vol. 123, 177–192 (Elsevier, 2014).

  4. Weigert, M. et al. Content-aware image restoration: pushing the limits of fluorescence microscopy. Nat. Methods 15, 1090–1097 (2018).

  5. Chen, J. et al. Three-dimensional residual channel attention networks denoise and sharpen fluorescence microscopy image volumes. Nat. Methods 18, 678–687 (2021).

  6. Li, Y. et al. Incorporating the image formation process into deep learning improves network performance. Nat. Methods 19, 1427–1437 (2022).

  7. Qiao, C. et al. Zero-shot learning enables instant denoising and super-resolution in optical fluorescence microscopy. Nat. Commun. 15, 4180 (2024).

  8. Qiao, C. et al. Evaluation and development of deep neural networks for image super-resolution in optical microscopy. Nat. Methods 18, 194–202 (2021).

  9. Chen, R. et al. Single-frame deep-learning super-resolution microscopy for intracellular dynamics imaging. Nat. Commun. 14, 2854 (2023).

  10. Pawley, J. B. Handbook of Biological Confocal Microscopy (Springer, 2006).

  11. Belthangady, C. & Royer, L. A. Applications, promises, and pitfalls of deep learning for fluorescence image reconstruction. Nat. Methods 16, 1215–1225 (2019).

  12. Kondepudi, A. et al. Foundation models for fast, label-free detection of glioma infiltration. Nature 637, 439–445 (2025).

  13. Moor, M. et al. Foundation models for generalist medical artificial intelligence. Nature 616, 259–265 (2023).

  14. Wang, X. et al. A pathology foundation model for cancer diagnosis and prognosis prediction. Nature 634, 970–978 (2024).

  15. Xiang, J. et al. A vision–language foundation model for precision oncology. Nature 638, 769–778 (2025).

  16. Xu, H. et al. A whole-slide foundation model for digital pathology from real-world data. Nature 630, 181–188 (2024).

  17. Zhou, Y. et al. A foundation model for generalizable disease detection from retinal images. Nature 622, 156–163 (2023).

  18. Sun, Y., Wang, L., Li, G., Lin, W. & Wang, L. A foundation model for enhancing magnetic resonance images and downstream segmentation, registration and diagnostic tasks. Nat. Biomed. Eng. 9, 521–538 (2025).

  19. Ma, C., Tan, W., He, R. & Yan, B. Pretraining a foundation model for generalizable fluorescence microscopy-based image restoration. Nat. Methods 21, 1558–1567 (2024).

  20. Jiao, J. et al. USFM: A universal ultrasound foundation model generalized to tasks and organs towards label efficient image analysis. Med. Image Anal. 96, 103202 (2024).

  21. Yan, Q. et al. Textual prompt guided image restoration. Eng. Appl. Artif. Intell. 155, 110981 (2025).

  22. Bai, Y. et al. TextIR: A Simple Framework for Text-based Editable Image Restoration. IEEE Trans. Vis. Comput. Graph. 1–16 (2025). https://doi.org/10.1109/TVCG.2025.3550844.

  23. Ronneberger, O., Fischer, P. & Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015 (eds Navab, N., Hornegger, J., Wells, W. M. & Frangi, A. F.) vol. 9351, 234–241 (Springer International Publishing, 2015).

  24. Rombach, R., Blattmann, A., Lorenz, D., Esser, P. & Ommer, B. High-Resolution Image Synthesis with Latent Diffusion Models. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 10674–10685 (IEEE, 2022). https://doi.org/10.1109/CVPR52688.2022.01042.

  25. Vaswani, A. et al. Attention is All you Need. In Advances in Neural Information Processing Systems (eds Guyon, I. et al.) vol. 30 (Curran Associates, Inc., 2017).

  26. Zhang, S. et al. A Multimodal Biomedical Foundation Model Trained from Fifteen Million Image–Text Pairs. NEJM AI 2 (2025).

  27. Reed, S. et al. Generative Adversarial Text to Image Synthesis. In Proceedings of The 33rd International Conference on Machine Learning 1060–1069 (PMLR, 2016).

  28. Bai, Y. et al. TextIR: A Simple Framework for Text-based Editable Image Restoration. IEEE Trans. Vis. Comput. Graph. 1–16 (2025). https://doi.org/10.1109/TVCG.2025.3550844.

  29. Lu, Y. et al. Priors in Deep Image Restoration and Enhancement: A Survey. Preprint at https://doi.org/10.48550/arXiv.2206.02070 (2023).

  30. Theodoris, C. V. et al. Transfer learning enables predictions in network biology. Nature 618, 616–624 (2023).

  31. Liao, T. et al. A super-resolution strategy for mass spectrometry imaging via transfer learning. Nat. Mach. Intell. 5, 656–668 (2023).

  32. Boutros, M., Heigwer, F. & Laufer, C. Microscopy-based high-content screening. Cell 163, 1314–1325 (2015).

  33. Pepperkok, R. & Ellenberg, J. High-throughput fluorescence microscopy for systems biology. Nat. Rev. Mol. Cell Biol. 7, 690–696 (2006).

  34. Pachitariu, M., Rariden, M. & Stringer, C. Cellpose-SAM: superhuman generalization for cellular segmentation. Preprint at https://doi.org/10.1101/2025.04.28.651001 (2025).

  35. Lefebvre, A. E. Y. T. et al. Nellie: automated organelle segmentation, tracking and hierarchical feature extraction in 2D/3D live-cell microscopy. Nat. Methods 22, 751–763 (2025).

  36. Sofroniew, N. et al. napari: a multi-dimensional image viewer for Python. Zenodo https://doi.org/10.5281/zenodo.12722145 (2024).

  37. Guo, M. et al. Deep learning-based aberration compensation improves contrast and resolution in fluorescence microscopy. Nat. Commun. 16, 313 (2025).

  38. Verveer, P. J. Advanced Fluorescence Microscopy: Methods and Protocols vol. 1251 (Springer, 2015).

  39. Esser, P. et al. Scaling Rectified Flow Transformers for High-Resolution Image Synthesis. In Proceedings of the 41st International Conference on Machine Learning 12606–12633 (PMLR, 2024).

  40. OpenAI. Video generation models as world simulators. https://openai.com/index/video-generation-models-as-world-simulators/ (2024).

  41. Liang, J. et al. SwinIR: Image Restoration Using Swin Transformer. In 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) 1833–1844 (IEEE, 2021). https://doi.org/10.1109/ICCVW54120.2021.00210.

  42. Zamir, S. W. et al. Restormer: Efficient Transformer for High-Resolution Image Restoration. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 5718–5729 (IEEE, 2022). https://doi.org/10.1109/CVPR52688.2022.00564.

  43. Tay, Y., Dehghani, M., Bahri, D. & Metzler, D. Efficient Transformers: A Survey. ACM Comput. Surv. 55, 1–28 (2023).

  44. Zhou, R. et al. W2S: Microscopy Data with Joint Denoising and Super-Resolution for Widefield to SIM Mapping. In Computer Vision – ECCV 2020 Workshops (eds Bartoli, A. & Fusiello, A.) vol. 12535, 474–491 (Springer International Publishing, 2020).

  45. Zhang, Y. et al. A Poisson-Gaussian Denoising Dataset With Real Fluorescence Microscopy Images. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 11702–11710 (IEEE, 2019). https://doi.org/10.1109/CVPR.2019.01198.

  46. Yao, K. et al. Scaffold-A549: a benchmark 3D fluorescence image dataset for unsupervised nuclei segmentation. Cogn. Comput. 13, 1603–1608 (2021).

  47. Spahn, C. et al. DeepBacs for multi-task bacterial image analysis using open-source deep learning approaches. Commun. Biol. 5, 688 (2022).

  48. Belharbi, S. et al. SR-CACO-2: a dataset for confocal fluorescence microscopy image super-resolution. In Proceedings of the 38th International Conference on Neural Information Processing Systems vol. 37, 59948–59983 (Curran Associates, Inc., 2024).

  49. Markwirth, A. et al. Video-rate multi-color structured illumination microscopy with simultaneous real-time reconstruction. Nat. Commun. 10, 4315 (2019).

  50. Svoboda, D., Homola, O. & Stejskal, S. Generation of 3D Digital Phantoms of Colon Tissue. In Image Analysis and Recognition (eds Kamel, M. & Campilho, A.) vol. 6754, 31–39 (Springer, 2011).

  51. Svoboda, D., Kozubek, M. & Stejskal, S. Generation of digital phantoms of cell nuclei and simulation of image formation in 3D image cytometry. Cytometry A 75A, 494–509 (2009).

  52. Loshchilov, I. & Hutter, F. Decoupled Weight Decay Regularization. In International Conference on Learning Representations (OpenReview.net, 2019).

  53. Van Der Walt, S. et al. scikit-image: image processing in Python. PeerJ 2, e453 (2014).

  54. Wang, Z., Simoncelli, E. P. & Bovik, A. C. Multiscale structural similarity for image quality assessment. In The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003, 1398–1402 (IEEE, 2003). https://doi.org/10.1109/ACSSC.2003.1292216.

  55. Wang, Z., Bovik, A. C., Sheikh, H. R. & Simoncelli, E. P. Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13, 600–612 (2004).

  56. Guo, M. et al. Rapid image deconvolution and multiview fusion for optical microscopy. Nat. Biotechnol. 38, 1337–1346 (2020).

  57. Culley, S. et al. Quantitative mapping and minimization of super-resolution optical imaging artifacts. Nat. Methods 15, 263–266 (2018).

  58. Fang, L. et al. Deep learning-based point-scanning super-resolution imaging. Nat. Methods 18, 406–416 (2021).

  59. Saraiva, B. M. et al. Efficiently accelerated bioimage analysis with NanoPyx, a Liquid Engine-powered Python framework. Nat. Methods 22, 283–286 (2025).

  60. Descloux, A., Grußmayer, K. S. & Radenovic, A. Parameter-free image resolution estimation based on decorrelation analysis. Nat. Methods 16, 918–924 (2019).

  61. Fang, S. et al. Resolution assessment of super-resolution microscopy imaging: structural and technical dependencies for cell biology. Cytotechnology 77, 170 (2025).

  62. Stringer, C. & Pachitariu, M. Cellpose3: one-click image restoration for improved cellular segmentation. Nat. Methods 22, 592–599 (2025).

  63. Zhang, Z., Nishimura, Y. & Kanchanawong, P. Extracting microtubule networks from superresolution single-molecule localization microscopy data. Mol. Biol. Cell 28, 333–345 (2017).

  64. Lu, Q., Liu, X., Feng, Q., Zeng, S. & Cheng, S. A foundation model for multi-task cross-distribution restoration of fluorescence microscopy images. Zenodo https://doi.org/10.5281/zenodo.18383925 (2026).


Acknowledgements

This work is supported by grants from the National Natural Science Foundation of China (62471212 and 62201221 to S.C.), the Guangdong Basic and Applied Basic Research Foundation (2025A1515011333 to S.C.), and Science and Technology Projects in Guangzhou (2024A04J4960 to S.C.). We would like to express our gratitude to Professor Jing Yuan from Huazhong University of Science and Technology for her valuable suggestions.

Author information

Authors and Affiliations

  1. School of Biomedical Engineering, Southern Medical University, Guangzhou, China

    Qiqi Lu, Qianjin Feng & Shenghua Cheng

  2. Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China

    Qiqi Lu, Qianjin Feng & Shenghua Cheng

  3. Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China

    Qiqi Lu, Qianjin Feng & Shenghua Cheng

  4. MOE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China

    Xiuli Liu & Shaoqun Zeng

Authors
  1. Qiqi Lu
  2. Xiuli Liu
  3. Qianjin Feng
  4. Shaoqun Zeng
  5. Shenghua Cheng

Contributions

S.C. and S.Z. supervised the research. S.C., Q.L., and Q.F. conceived and initiated the project. Q.L. and S.C. developed and implemented the algorithm. Q.L., S.C., and X.L. designed and performed the validation experiments. Q.L. developed the napari plugin. Q.L. and S.C. wrote the manuscript under the supervision of S.C., Q.F., and X.L. All authors discussed the results and revised the manuscript.

Corresponding authors

Correspondence to Shaoqun Zeng or Shenghua Cheng.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Communications thanks Austin Lefebvre and Bo Yan for their contribution to the peer review of this work. A peer review file is available.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Information (PDF)

Description of Additional Supplementary Files (PDF)

Reporting Summary (PDF)

Supplementary Data 1 (XLSX)

Supplementary Data 2 (XLSX)

Supplementary Data 3 (XLSX)

Supplementary Data 4 (XLSX)

Supplementary Movie 1 (MP4)

Transparent Peer Review file (PDF)

Source data

Source Data (XLSX)

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


About this article


Cite this article

Lu, Q., Liu, X., Feng, Q. et al. A foundation model for multi-task cross-distribution restoration of fluorescence microscopy images. Nat Commun (2026). https://doi.org/10.1038/s41467-026-70307-4


  • Received: 29 August 2025

  • Accepted: 24 February 2026

  • Published: 10 March 2026

  • DOI: https://doi.org/10.1038/s41467-026-70307-4

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

Download PDF

Advertisement

Explore content

  • Research articles
  • Reviews & Analysis
  • News & Comment
  • Videos
  • Collections
  • Subjects
  • Follow us on Facebook
  • Follow us on X
  • Sign up for alerts
  • RSS feed

About the journal

  • Aims & Scope
  • Editors
  • Journal Information
  • Open Access Fees and Funding
  • Calls for Papers
  • Editorial Values Statement
  • Journal Metrics
  • Editors' Highlights
  • Contact
  • Editorial policies
  • Top Articles

Publish with us

  • For authors
  • For Reviewers
  • Language editing services
  • Open access funding
  • Submit manuscript

Search

Advanced search

Quick links

  • Explore articles by subject
  • Find a job
  • Guide to authors
  • Editorial policies

Nature Communications (Nat Commun)

ISSN 2041-1723 (online)
