Abstract
Deep learning has demonstrated remarkable ability in restoring fluorescence microscopy images degraded by noise, blur, or undersampling. However, most existing models are task-specific and trained on limited, homogeneously distributed data, which restricts their generalizability and practicality. Here, we present FluoResFM, a unified foundation model for multi-task, cross-distribution fluorescence microscopy image restoration. FluoResFM leverages textual prior information to adapt to specific tasks and data distributions. Trained on datasets spanning three tasks (image denoising, deconvolution, and super-resolution) and more than 20 biological structures, FluoResFM demonstrates superior restoration performance and enhanced generalization across datasets with varied biological structures and imaging conditions. Through fine-tuning with only a single sample, FluoResFM can further improve its performance on unseen data, achieving results comparable to conventional models trained on hundreds of samples, and can be easily adapted to additional tasks, including 3D image restoration, surface projection, isotropic reconstruction, and super-resolution with various scale factors. Moreover, the performance of existing cell/organelle segmentation models can be enhanced using the high-quality images restored by FluoResFM.
Data availability
All the datasets used in this study are derived from the existing literature and are publicly accessible through the corresponding links provided in Supplementary Tables 1–4. Example data for training and testing are available in the Zenodo repository (https://doi.org/10.5281/zenodo.18382702). Other raw data (e.g., raw image data underlying the figures and intermediate data for deep learning) are available from the corresponding author upon request due to their large file size. Source data are provided with this paper.
Code availability
The source code is available via the GitHub repository (https://github.com/qiqi-lu/fluoresfm) and the Zenodo repository64 (https://doi.org/10.5281/zenodo.18383925). The napari plugin code is publicly accessible via GitHub at https://github.com/qiqi-lu/napari-fluoresfm. A guidance video on using the napari plugin is available at https://www.bilibili.com/video/BV16JeFzuEof. The pre-trained text and image encoders from BiomedCLIP and the pre-trained FluoResFM model are publicly accessible in the Zenodo repository (https://doi.org/10.5281/zenodo.18382702).
References
Schermelleh, L. et al. Super-resolution microscopy demystified. Nat. Cell Biol. 21, 72–84 (2019).
Scherf, N. & Huisken, J. The smart and gentle microscope. Nat. Biotechnol. 33, 815–818 (2015).
Goodwin, P. C. Quantitative deconvolution microscopy. in Methods in Cell Biology vol. 123 177–192 (Elsevier, 2014).
Weigert, M. et al. Content-aware image restoration: pushing the limits of fluorescence microscopy. Nat. Methods 15, 1090–1097 (2018).
Chen, J. et al. Three-dimensional residual channel attention networks denoise and sharpen fluorescence microscopy image volumes. Nat. Methods 18, 678–687 (2021).
Li, Y. et al. Incorporating the image formation process into deep learning improves network performance. Nat. Methods 19, 1427–1437 (2022).
Qiao, C. et al. Zero-shot learning enables instant denoising and super-resolution in optical fluorescence microscopy. Nat. Commun. 15, 4180 (2024).
Qiao, C. et al. Evaluation and development of deep neural networks for image super-resolution in optical microscopy. Nat. Methods 18, 194–202 (2021).
Chen, R. et al. Single-frame deep-learning super-resolution microscopy for intracellular dynamics imaging. Nat. Commun. 14, 2854 (2023).
Pawley, J. B. Handbook of Biological Confocal Microscopy (Springer, 2006).
Belthangady, C. & Royer, L. A. Applications, promises, and pitfalls of deep learning for fluorescence image reconstruction. Nat. Methods 16, 1215–1225 (2019).
Kondepudi, A. et al. Foundation models for fast, label-free detection of glioma infiltration. Nature 637, 439–445 (2025).
Moor, M. et al. Foundation models for generalist medical artificial intelligence. Nature 616, 259–265 (2023).
Wang, X. et al. A pathology foundation model for cancer diagnosis and prognosis prediction. Nature 634, 970–978 (2024).
Xiang, J. et al. A vision–language foundation model for precision oncology. Nature 638, 769–778 (2025).
Xu, H. et al. A whole-slide foundation model for digital pathology from real-world data. Nature 630, 181–188 (2024).
Zhou, Y. et al. A foundation model for generalizable disease detection from retinal images. Nature 622, 156–163 (2023).
Sun, Y., Wang, L., Li, G., Lin, W. & Wang, L. A foundation model for enhancing magnetic resonance images and downstream segmentation, registration and diagnostic tasks. Nat. Biomed. Eng. 9, 521–538 (2025).
Ma, C., Tan, W., He, R. & Yan, B. Pretraining a foundation model for generalizable fluorescence microscopy-based image restoration. Nat. Methods 21, 1558–1567 (2024).
Jiao, J. et al. USFM: A universal ultrasound foundation model generalized to tasks and organs towards label efficient image analysis. Med. Image Anal. 96, 103202 (2024).
Yan, Q. et al. Textual prompt guided image restoration. Eng. Appl. Artif. Intell. 155, 110981 (2025).
Bai, Y. et al. TextIR: A Simple Framework for Text-based Editable Image Restoration. IEEE Trans. Vis. Comput. Graph. 1–16 (2025) https://doi.org/10.1109/TVCG.2025.3550844.
Ronneberger, O., Fischer, P. & Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015 (eds Navab, N., Hornegger, J., Wells, W. M. & Frangi, A. F.) vol. 9351 234–241 (Springer International Publishing, 2015).
Rombach, R., Blattmann, A., Lorenz, D., Esser, P. & Ommer, B. High-Resolution Image Synthesis with Latent Diffusion Models. in 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 10674–10685 (IEEE, 2022). https://doi.org/10.1109/CVPR52688.2022.01042.
Vaswani, A. et al. Attention is All you Need. In Advances in Neural Information Processing Systems (eds Guyon, I. et al.). Vol. 30 (Curran Associates, Inc., 2017).
Zhang, S. et al. A Multimodal Biomedical Foundation Model Trained from Fifteen Million Image–Text Pairs. NEJM AI 2 (2025).
Reed, S. et al. Generative Adversarial Text to Image Synthesis. In Proceedings of The 33rd International Conference on Machine Learning 1060–1069 (PMLR, 2016).
Bai, Y. et al. TextIR: A Simple Framework for Text-based Editable Image Restoration. IEEE Trans. Visual. Comput. Graphics 1–16 (2025) https://doi.org/10.1109/TVCG.2025.3550844.
Lu, Y. et al. Priors in Deep Image Restoration and Enhancement: A Survey. Preprint at https://doi.org/10.48550/arXiv.2206.02070 (2023).
Theodoris, C. V. et al. Transfer learning enables predictions in network biology. Nature 618, 616–624 (2023).
Liao, T. et al. A super-resolution strategy for mass spectrometry imaging via transfer learning. Nat. Mach. Intell. 5, 656–668 (2023).
Boutros, M., Heigwer, F. & Laufer, C. Microscopy-based high-content screening. Cell 163, 1314–1325 (2015).
Pepperkok, R. & Ellenberg, J. High-throughput fluorescence microscopy for systems biology. Nat. Rev. Mol. Cell Biol. 7, 690–696 (2006).
Pachitariu, M., Rariden, M. & Stringer, C. Cellpose-SAM: superhuman generalization for cellular segmentation. Preprint at https://doi.org/10.1101/2025.04.28.651001 (2025).
Lefebvre, A. E. Y. T. et al. Nellie: automated organelle segmentation, tracking and hierarchical feature extraction in 2D/3D live-cell microscopy. Nat. Methods 22, 751–763 (2025).
Sofroniew, N. et al. Napari: a multi-dimensional image viewer for Python. Zenodo https://doi.org/10.5281/zenodo.12722145 (2024).
Guo, M. et al. Deep learning-based aberration compensation improves contrast and resolution in fluorescence microscopy. Nat. Commun. 16, 313 (2025).
Verveer, P. J. Advanced Fluorescence Microscopy: Methods and Protocols. vol. 1251 (Springer New York, New York, NY, 2015).
Esser, P. et al. Scaling Rectified Flow Transformers for High-Resolution Image Synthesis. In Proceedings of the 41st International Conference on Machine Learning 12606–12633 (PMLR, 2024).
Video generation models as world simulators. https://openai.com/index/video-generation-models-as-world-simulators/ (2024).
Liang, J. et al. SwinIR: Image Restoration Using Swin Transformer. In 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) 1833–1844 (IEEE, Montreal, BC, Canada, 2021). https://doi.org/10.1109/ICCVW54120.2021.00210.
Zamir, S. W. et al. Restormer: Efficient Transformer for High-Resolution Image Restoration. in 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 5718–5729 (IEEE, 2022). https://doi.org/10.1109/CVPR52688.2022.00564.
Tay, Y., Dehghani, M., Bahri, D. & Metzler, D. Efficient Transformers: A Survey. ACM Comput. Surv. 55, 1–28 (2023).
Zhou, R. et al. W2S: Microscopy Data with Joint Denoising and Super-Resolution for Widefield to SIM Mapping. in Computer Vision–ECCV 2020 Workshops (eds Bartoli, A. & Fusiello, A.) vol. 12535 474–491 (Springer International Publishing, 2020).
Zhang, Y. et al. A Poisson-Gaussian Denoising Dataset With Real Fluorescence Microscopy Images. in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 11702–11710 (IEEE, 2019). https://doi.org/10.1109/CVPR.2019.01198.
Yao, K. et al. Scaffold-A549: a benchmark 3D fluorescence image dataset for unsupervised nuclei segmentation. Cogn. Comput. 13, 1603–1608 (2021).
Spahn, C. et al. DeepBacs for multi-task bacterial image analysis using open-source deep learning approaches. Commun. Biol. 5, 688 (2022).
Belharbi, S. et al. SR-CACO-2: a dataset for confocal fluorescence microscopy image super-resolution. In Proceedings of the 38th International Conference on Neural Information Processing Systems vol. 37, 59948–59983 (Curran Associates Inc., Red Hook, NY, USA, 2024).
Markwirth, A. et al. Video-rate multi-color structured illumination microscopy with simultaneous real-time reconstruction. Nat. Commun. 10, 4315 (2019).
Svoboda, D., Homola, O. & Stejskal, S. Generation of 3D Digital Phantoms of Colon Tissue. In Image Analysis and Recognition (eds Kamel, M. & Campilho, A.) vol. 6754, 31–39 (Springer Berlin Heidelberg, 2011).
Svoboda, D., Kozubek, M. & Stejskal, S. Generation of digital phantoms of cell nuclei and simulation of image formation in 3D image cytometry. Cytometry A. 75A, 494–509 (2009).
Loshchilov, I. & Hutter, F. Decoupled Weight Decay Regularization. in International Conference on Learning Representations (OpenReview.net, 2019).
van der Walt, S. et al. scikit-image: image processing in Python. PeerJ 2, e453 (2014).
Wang, Z., Simoncelli, E. P. & Bovik, A. C. Multiscale structural similarity for image quality assessment. in The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003 1398–1402 (IEEE, 2003). https://doi.org/10.1109/ACSSC.2003.1292216.
Wang, Z., Bovik, A. C., Sheikh, H. R. & Simoncelli, E. P. Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13, 600–612 (2004).
Guo, M. et al. Rapid image deconvolution and multiview fusion for optical microscopy. Nat. Biotechnol. 38, 1337–1346 (2020).
Culley, S. et al. Quantitative mapping and minimization of super-resolution optical imaging artifacts. Nat. Methods 15, 263–266 (2018).
Fang, L. et al. Deep learning-based point-scanning super-resolution imaging. Nat. Methods 18, 406–416 (2021).
Saraiva, B. M. et al. Efficiently accelerated bioimage analysis with NanoPyx, a Liquid Engine-powered Python framework. Nat. Methods 22, 283–286 (2025).
Descloux, A., Grußmayer, K. S. & Radenovic, A. Parameter-free image resolution estimation based on decorrelation analysis. Nat. Methods 16, 918–924 (2019).
Fang, S. et al. Resolution assessment of super-resolution microscopy imaging: structural and technical dependencies for cell biology. Cytotechnology 77, 170 (2025).
Stringer, C. & Pachitariu, M. Cellpose3: one-click image restoration for improved cellular segmentation. Nat. Methods 22, 592–599 (2025).
Zhang, Z., Nishimura, Y. & Kanchanawong, P. Extracting microtubule networks from superresolution single-molecule localization microscopy data. Mol. Biol. Cell 28, 333–345 (2017).
Lu, Q., Liu, X., Feng, Q., Zeng, S. & Cheng, S. A foundation model for multi-task cross-distribution restoration of fluorescence microscopy image. Zenodo https://doi.org/10.5281/zenodo.18383925 (2026).
Acknowledgements
This work is supported by grants from the National Natural Science Foundation of China (grants 62471212 and 62201221 to S.C.), the Guangdong Basic and Applied Basic Research Foundation (grant 2025A1515011333 to S.C.), and the Science and Technology Projects in Guangzhou (grant 2024A04J4960 to S.C.). We would like to express our gratitude to Professor Jing Yuan from Huazhong University of Science and Technology for her valuable suggestions.
Author information
Authors and Affiliations
Contributions
S.C. and S.Z. supervised the research. S.C., Q.L., and Q.F. conceived and initiated the project. Q.L. and S.C. developed and implemented the algorithm. Q.L., S.C., and X.L. designed and performed the validation experiments. Q.L. developed the napari plugin. Q.L. and S.C. wrote the manuscript under the supervision of S.C., Q.F., and X.L. All authors discussed the results and revised the manuscript.
Corresponding authors
Ethics declarations
Competing interests
The authors declare no competing interests.
Peer review
Peer review information
Nature Communications thanks Austin Lefebvre and Bo Yan for their contribution to the peer review of this work. A peer review file is available.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Source data
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Lu, Q., Liu, X., Feng, Q. et al. A foundation model for multi-task cross-distribution restoration of fluorescence microscopy images. Nat Commun (2026). https://doi.org/10.1038/s41467-026-70307-4
Received:
Accepted:
Published:
DOI: https://doi.org/10.1038/s41467-026-70307-4