Self-inspired learning for denoising live-cell super-resolution microscopy

Abstract

Every collected photon is precious in live-cell super-resolution (SR) microscopy. Here, we describe a data-efficient, deep learning-based denoising solution to improve diverse SR imaging modalities. The method, SN2N, is a Self-inspired Noise2Noise module with self-supervised data generation and a self-constrained learning process. SN2N is fully competitive with supervised learning methods and circumvents the need for a large training set and clean ground truth, requiring only a single noisy frame for training. We show that SN2N improves photon efficiency by one to two orders of magnitude and is compatible with multiple imaging modalities for volumetric, multicolor, time-lapse SR microscopy. We further integrated SN2N into different SR reconstruction algorithms to effectively mitigate image artifacts. We anticipate that SN2N will enable improved live-SR imaging and inspire further advances.

Fig. 1: Workflow and simulation validation of SN2N.
Fig. 2: Systematical evaluations in SD-SIM experiments using known structures.
Fig. 3: Multicolor live-cell SR imaging enabled by RL-SN2N on SD-SIM.
Fig. 4: 3D RL-SN2N on SD-SIM unlocks fast long-term imaging across 5D.
Fig. 5: SN2N and RL-SN2N permit long-term live-cell STED imaging.
Fig. 6: Integration of SN2N and SOFI massively improves the SR reconstruction efficiency.

Data availability

We provide two representative datasets (from Figs. 1 and 3b) at https://github.com/WeisongZhao/SN2N/. All other data that support the findings of this study are available from the corresponding author upon request.

Code availability

Videos were produced with Microsoft PowerPoint and our lightweight MATLAB framework, which is available at https://github.com/WeisongZhao/img2vid/. The percentile normalization method has been written as a Fiji/ImageJ plugin and can be found at https://github.com/WeisongZhao/percentile_normalization.imagej/. The tutorials and the updated version of our SN2N can be found at https://github.com/WeisongZhao/SN2N/.
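
For readers unfamiliar with the preprocessing step, percentile normalization simply rescales image intensities between two chosen percentiles. The sketch below is a minimal Python illustration of that idea; the percentile values and clipping behavior are assumptions and do not necessarily match the defaults of the Fiji/ImageJ plugin linked above.

```python
import numpy as np

def percentile_normalize(img, p_low=0.1, p_high=99.9, eps=1e-6):
    """Rescale intensities so the p_low/p_high percentiles map to 0/1, then clip."""
    lo, hi = np.percentile(img, [p_low, p_high])
    out = (np.asarray(img, dtype=np.float32) - lo) / max(hi - lo, eps)
    return np.clip(out, 0.0, 1.0)
```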

References

  1. Schermelleh, L. et al. Super-resolution microscopy demystified. Nat. Cell Biol. 21, 72–84 (2019).

  2. Schermelleh, L. et al. Subdiffraction multicolor imaging of the nuclear periphery with 3D structured illumination microscopy. Science 320, 1332–1336 (2008).

  3. Lawo, S., Hasegan, M., Gupta, G. D. & Pelletier, L. Subdiffraction imaging of centrosomes reveals higher-order organizational features of pericentriolar material. Nat. Cell Biol. 14, 1148–1158 (2012).

  4. Szymborska, A. et al. Nuclear pore scaffold structure analyzed by super-resolution microscopy and particle averaging. Science 341, 655–658 (2013).

  5. Xu, K., Zhong, G. & Zhuang, X. Actin, spectrin, and associated proteins form a periodic cytoskeletal structure in axons. Science 339, 452–456 (2013).

  6. Mishin, A. & Lukyanov, K. Live-cell super-resolution fluorescence microscopy. Biochemistry 84, 19–31 (2019).

  7. Godin, A. G., Lounis, B. & Cognet, L. Super-resolution microscopy approaches for live cell imaging. Biophys. J. 107, 1777–1784 (2014).

  8. Valli, J. et al. Seeing beyond the limit: a guide to choosing the right super-resolution microscopy technique. J. Biol. Chem. 297, 100791 (2021).

  9. Luisier, F., Vonesch, C., Blu, T. & Unser, M. Fast interscale wavelet denoising of Poisson-corrupted images. Signal Process. 90, 415–427 (2010).

  10. Huang, X. et al. Fast, long-term, super-resolution imaging with Hessian structured illumination microscopy. Nat. Biotechnol. 36, 451–459 (2018).

  11. Mandracchia, B. et al. Fast and accurate sCMOS noise correction for fluorescence microscopy. Nat. Commun. 11, 94 (2020).

  12. Zhao, W. et al. Sparse deconvolution improves the resolution of live-cell super-resolution fluorescence microscopy. Nat. Biotechnol. 40, 606–617 (2022).

  13. Belthangady, C. & Royer, L. A. Applications, promises, and pitfalls of deep learning for fluorescence image reconstruction. Nat. Methods 16, 1215–1225 (2019).

  14. Weigert, M. et al. Content-aware image restoration: pushing the limits of fluorescence microscopy. Nat. Methods 15, 1090–1097 (2018).

  15. Chen, J. et al. Three-dimensional residual channel attention networks denoise and sharpen fluorescence microscopy image volumes. Nat. Methods 18, 678–687 (2021).

  16. Fang, L. et al. Deep learning-based point-scanning super-resolution imaging. Nat. Methods 18, 406–416 (2021).

  17. Lehtinen, J. et al. Noise2Noise: learning image restoration without clean data. In Proceedings of the 35th International Conference on Machine Learning (eds Dy, J. & Krause, A.) 2965–2974 (PMLR, 2018).

  18. Batson, J. & Royer, L. Noise2self: blind denoising by self-supervision. In Proceedings of the 36th International Conference on Machine Learning (eds Chaudhuri, K. & Salakhutdinov, R.) 524–533 (PMLR, 2019).

  19. Krull, A., Buchholz, T. -O. & Jug, F. Noise2void—learning denoising from single noisy images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2129–2137 (2019).

  20. Eom, M. et al. Statistically unbiased prediction enables accurate denoising of voltage imaging data. Nat. Methods 20, 1581–1592 (2023).

  21. Zhang, G. et al. Bio-friendly long-term subcellular dynamic recording by self-supervised image enhancement microscopy. Nat. Methods 20, 1957–1970 (2023).

  22. Li, X. et al. Reinforcing neuron extraction and spike inference in calcium imaging using deep self-supervised denoising. Nat. Methods 18, 1395–1400 (2021).

  23. Lecoq, J. et al. Removing independent noise in systems neuroscience data using DeepInterpolation. Nat. Methods 18, 1401–1408 (2021).

  24. Qiao, C. et al. Rationalized deep learning super-resolution microscopy for sustained live imaging of rapid subcellular processes. Nat. Biotechnol. 41, 367–377 (2023).

  25. Muller, C. B. & Enderlein, J. Image scanning microscopy. Phys. Rev. Lett. 104, 198101 (2010).

  26. Hayashi, S. & Okada, Y. Ultrafast superresolution fluorescence imaging with spinning disk confocal microscope optics. Mol. Biol. Cell 26, 1743–1751 (2015).

  27. Hell, S. W. & Wichmann, J. Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy. Opt. Lett. 19, 780–782 (1994).

  28. Vicidomini, G. et al. Sharper low-power STED nanoscopy by time gating. Nat. Methods 8, 571–573 (2011).

  29. Sun, D.-E. et al. Click-ExM enables expansion microscopy for all biomolecules. Nat. Methods 18, 107–113 (2021).

  30. Dertinger, T., Colyer, R., Iyer, G., Weiss, S. & Enderlein, J. Fast, background-free, 3D super-resolution optical fluctuation imaging (SOFI). Proc. Natl Acad. Sci. USA 106, 22287–22292 (2009).

  31. Zhao, W. et al. Enhanced detection of fluorescence fluctuations for high-throughput super-resolution imaging. Nat. Photonics 17, 806–813 (2023).

  32. Born, M. & Wolf, E. Principles of Optics, 7th Edn (Cambridge University Press, 1999).

  33. Lequyer, J., Philip, R., Sharma, A., Hsu, W. -H. & Pelletier, L. A fast blind zero-shot denoiser. Nat. Mach. Intell. 4, 953–963 (2022).

  34. Li, X. et al. Spatial redundancy transformer for self-supervised fluorescence image denoising. Nat. Comput. Sci. 3, 1067–1080 (2023).

  35. Chen, X. et al. Self-supervised denoising for multimodal structured illumination microscopy enables long-term super-resolution live-cell imaging. PhotoniX 5, 1–22 (2024).

  36. Stein, S. C., Huss, A., Hähnel, D., Gregor, I. & Enderlein, J. Fourier interpolation stochastic optical fluctuation imaging. Opt. Express 23, 16154–16163 (2015).

  37. Ronneberger, O., Fischer, P. & Brox, T. U-net: convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, 234–241 (2015).

  38. Yun, S. et al. Cutmix: regularization strategy to train strong classifiers with localizable features. In Proceedings of the IEEE/CVF Conference on ICCV, 6023–6032 (2019).

  39. Nieuwenhuizen, R. P. et al. Measuring image resolution in optical nanoscopy. Nat. Methods 10, 557–562 (2013).

  40. Wang, Z., Bovik, A. C., Sheikh, H. R. & Simoncelli, E. P. Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13, 600–612 (2004).

  41. Lakshminarayanan, B., Pritzel, A. & Blundell, C. Simple and scalable predictive uncertainty estimation using deep ensembles. Adv. Neural Inf. Process. Syst. 30, 6402–6413 (2017).

  42. Prakash, M., Lalit, M., Tomancak, P., Krull, A. & Jug, F. Fully unsupervised probabilistic Noise2Void. In 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), 154–158 (2020).

  43. Quan, Y., Chen, M., Pang, T. & Ji, H. Self2self with dropout: learning self-supervised denoising from single image. In Proceedings of the IEEE/CVF Conference on CVPR, 1890–1898 (2020).

  44. Pang, T., Zheng, H., Quan, Y. & Ji, H. Recorrupted-to-recorrupted: unsupervised deep learning for image denoising. In Proceedings of the IEEE/CVF Conference on CVPR, 2043–2052 (2021).

  45. Richardson, W. H. Bayesian-based iterative method of image restoration. J. Opt. Soc. Am. 62, 55–59 (1972).

  46. Lucy, L. B. An iterative technique for the rectification of observed distributions. Astron. J. 79, 745–754 (1974).

  47. Guo, Y. et al. Visualizing intracellular organelle and cytoskeletal interactions at nanoscale resolution on millisecond timescales. Cell 175, 1430–1442 (2018).

  48. Lu, M. et al. The structure and global distribution of the endoplasmic reticulum network are actively regulated by lysosomes. Sci. Adv. 6, eabc7209 (2020).

  49. Lu, M. et al. ERnet: a tool for the semantic segmentation and quantitative analysis of endoplasmic reticulum topology. Nat. Methods 20, 569–579 (2023).

  50. Sekh, A. A. et al. Physics-based machine learning for subcellular segmentation in living cells. Nat. Mach. Intell. 3, 1071–1080 (2021).

  51. Harke, B. et al. Resolution scaling in STED microscopy. Opt. Express 16, 4154–4162 (2008).

  52. Tortarolo, G. et al. Focus image scanning microscopy for sharp and gentle super-resolved microscopy. Nat. Commun. 13, 7723 (2022).

  53. Liu, T. et al. Multi-color live-cell STED nanoscopy of mitochondria with a gentle inner membrane stain. Proc. Natl Acad. Sci. USA 119, e2215799119 (2022).

  54. Wen, G. et al. High-fidelity structured illumination microscopy by point-spread-function engineering. Light Sci. Appl. 10, 70 (2021).

  55. Qiao, C. et al. Evaluation and development of deep neural networks for image super-resolution in optical microscopy. Nat. Methods 18, 194–202 (2021).

  56. Zhang, Y. et al. Mitochondria determine the sequential propagation of the calcium macrodomains revealed by the super-resolution calcium lantern imaging. Sci. China Life Sci. 63, 1543–1551 (2020).

  57. Isensee, F., Jaeger, P. F., Kohl, S. A., Petersen, J. & Maier-Hein, K. H. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat. Methods 18, 203–211 (2021).

  58. Mirza, M. & Osindero, S. Conditional generative adversarial nets. Preprint at https://arxiv.org/abs/1411.1784 (2014).

  59. Cao, H. et al. Swin-Unet: Unet-like pure transformer for medical image segmentation. In European Conference on Computer Vision, 205–218 (2022).

  60. Ioffe, S. & Szegedy, C. Batch normalization: accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, 448–456 (2015).

  61. Kingma, D. P. & Ba, J. Adam: A method for stochastic optimization. Preprint at https://arxiv.org/abs/1412.6980 (2014).

  62. Biggs, D. S. & Andrews, M. Acceleration of iterative image restoration algorithms. Appl. Opt. 36, 1766–1775 (1997).

  63. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 9, 62–66 (1979).

  64. Ershov, D. et al. TrackMate 7: integrating state-of-the-art segmentation algorithms into tracking pipelines. Nat. Methods 19, 829–832 (2022).

  65. Qian, H., Sheetz, M. P. & Elson, E. L. Single particle tracking. Analysis of diffusion and flow in two-dimensional systems. Biophys. J. 60, 910–921 (1991).

  66. Ba, Q., Raghavan, G., Kiselyov, K. & Yang, G. Whole-cell scale dynamic organization of lysosomes revealed by spatial statistical analysis. Cell Rep. 23, 3591–3606 (2018).

  67. Damenti, M., Coceano, G., Pennacchietti, F., Boden, A. & Testa, I. STED and parallelized RESOLFT optical nanoscopy of the tubular endoplasmic reticulum and its mitochondrial contacts in neuronal cells. Neurobiol. Dis. 155, 105361 (2021).

  68. Zhao, W. et al. Quantitatively mapping local quality of super-resolution microscopy by rolling Fourier ring correlation. Light Sci. Appl. 12, 298 (2023).

  69. Li, M. et al. LuckyProfiler: an ImageJ plug-in capable of quantifying FWHM resolution easily and effectively for super-resolution images. Biomed. Opt. Express 13, 4310–4325 (2022).

  70. Lee, J. et al. Versatile phenotype-activated cell sorting. Sci. Adv. 6, eabb7438 (2020).

  71. Zhang, X. et al. Development of a reversibly switchable fluorescent protein for super-resolution optical fluctuation imaging (SOFI). ACS Nano 9, 2659–2667 (2015).

  72. Tillberg, P. et al. Protein-retention expansion microscopy of cells and tissues labeled using standard fluorescent proteins and antibodies. Nat. Biotechnol. 34, 987–992 (2016).

Acknowledgements

We thank T. Liu from Z. Chen’s laboratory at Peking University for assistance with STED imaging of PKMO-labeled mitochondrial cristae. This work was supported by the National Key Research and Development Program of China (grant no. 2022YFC3400600 to L.C.), the National Natural Science Foundation of China (grant nos. 32422052 to W.Z., 62305083 to W.Z., T2222009 to H.L., 32227802 to L.C., 21927813 to L.C., 81925022 to L.C., 92054301 to L.C., 32301257 to S.Z., 32071458 to H.M.), the Young Elite Scientists Sponsorship Program by the China Association for Science and Technology (grant no. 2023QNRC001 to W.Z.), the Heilongjiang Provincial Postdoctoral Science Foundation (grant no. LBH-Z22027 to W.Z.), the Natural Science Foundation of Heilongjiang Province (grant no. YQ2021F013 to H.L.), the Beijing Natural Science Foundation (grant no. Z20J00059 to L.C.), the Nanyang Assistant Professorship Start-up Grant and the National Research Foundation of Singapore (grant no. NRF-CRP29-2022-0003 to G.H.), and the Guangdong Basic and Applied Basic Research Foundation (grant no. 2022A1515011683 to J.H.). L.C. acknowledges support from the High-performance Computing Platform of Peking University.

Author information

Contributions

W.Z. conceived the research; L.Q. implemented the corresponding software; S.Z., X.Y. and K.W. performed the experiments and collected the data; Q.L. analyzed the data and prepared the figures; L.Q. and Y.H. prepared the videos; X.L., H.M., G.H., W.C., C.G., J.H., J.T., H.L. and L.C. participated in discussions during the development of the manuscript; W.Z. and L.Q. wrote the manuscript with input from all authors; W.Z., H.L. and L.C. supervised the project. All authors participated in the discussions and data interpretation.

Corresponding author

Correspondence to Weisong Zhao.

Ethics declarations

Competing interests

L.C., H.L., W.Z. and L.Q. have a pending patent application on the presented framework. The remaining authors declare no competing interests.

Peer review

Peer review information

Nature Methods thanks Laurence Pelletier, Yide Zhang, Jiji Chen and the other, anonymous, reviewers for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 Workflow of SN2N and network architectures.

a, Detailed flow diagram of SN2N (Methods). Steps 1–3: self-supervised data generation. First, an optional data pre-augmentation is performed using the Patch2Patch strategy (random patch transformations in multiple dimensions). After that, a sliding-window approach is employed to generate small patches suitable for input into the network for training. Subsequently, the spatial diagonal resampling strategy followed by Fourier upsampling is used to create paired SN2N data. Additionally, basic augmentations such as rotation and flipping (optional) are applied to the generated data pairs. Step 4: self-constrained learning process. SN2N uses the classical U-Net architecture and selects either the 2D or the 3D U-Net based on the input data dimensions. The generated paired images are treated as one training example, and the resulting two predictions are used to calculate the loss for backpropagation. b, Patch2Patch (P2P) pipeline (Methods). Three augmentation modes are available: along the temporal axis, within a single image, and between different experiments.
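
The self-supervised data generation in steps 1–3 can be summarized with a short sketch. The Python snippet below is a minimal illustration only: it assumes the diagonal resampling averages the two diagonals of each non-overlapping 2 × 2 block and that the Fourier upsampling is plain zero-padding of the centered spectrum; the released SN2N implementation may differ in these details.

```python
import numpy as np

def diagonal_split(img):
    """Split one noisy frame into two sub-images by averaging the two
    diagonals of each non-overlapping 2x2 block."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    blocks = img[:h, :w].reshape(h // 2, 2, w // 2, 2).astype(np.float32)
    sub_a = 0.5 * (blocks[:, 0, :, 0] + blocks[:, 1, :, 1])   # main diagonal
    sub_b = 0.5 * (blocks[:, 0, :, 1] + blocks[:, 1, :, 0])   # anti-diagonal
    return sub_a, sub_b

def fourier_upsample(img, factor=2):
    """Upsample by zero-padding the centered Fourier spectrum."""
    spec = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    H, W = h * factor, w * factor
    padded = np.zeros((H, W), dtype=complex)
    padded[(H - h) // 2:(H - h) // 2 + h, (W - w) // 2:(W - w) // 2 + w] = spec
    return np.real(np.fft.ifft2(np.fft.ifftshift(padded))) * factor ** 2

noisy = np.random.poisson(50, (256, 256)).astype(np.float32)   # stand-in noisy frame
sub_a, sub_b = diagonal_split(noisy)
pair = fourier_upsample(sub_a), fourier_upsample(sub_b)         # one SN2N-style training pair
```

The two resulting images share the underlying structure but carry largely independent noise, which is what makes them usable as a Noise2Noise input/target pair.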

Extended Data Fig. 2 Systematical tests of different components of SN2N.

a, SN2N denoising results under different pixel sizes at the same resolution. From top to bottom: Raw images, SN2N results, and clean ground-truth images. The synthetic structures (16.25 nm pixel size) were convolved with a 150 nm PSF and downsampled by 2, 3, 4, 5, 6, 8, and 16 times. SSIM and PSNR values of the SN2N results are marked in the bottom-right corners. b, SSIM (top) and PSNR (bottom) values from data under different downsampling rates. c, Ablation tests for individual components of SN2N. From left to right: Raw (top)/ground truth (bottom), without the downsampling process (Raw2Raw, the same noisy image as both input and label), with downsampling only, including the Fourier upsampling step, integrating the self-constrained learning process, and supplementing our Patch2Patch augmentation. We found that the network could not denoise without the downsampling step, and that the upsampling step enforced the consistency of the predicted structural scale. The self-constrained learning process strengthened the data efficiency and performance, and the addition of Patch2Patch further maximized the data efficiency. d, SSIM values of the different components of SN2N. e, SN2N denoising results under different interpolation methods. From left to right: Raw input, SN2N results using data without interpolation, with bilinear interpolation, and with our Fourier interpolation as training sets, and the ground-truth image. f, SSIM values of SN2N under different interpolation strategies. g, SN2N results under different self-constrained regularization weights (values labeled in the top-left corners). h, Average SSIM values under different self-constrained regularization weights (n = 10). i, SN2N denoising results under different photon levels (from several hundred photons to a single photon, Methods). j, SSIM values under different photon levels. In a, e, g, and i, the models were trained with 50 frames (full data case). In c, the models were trained with both 50 frames (full data case) and one image (1/50 data case). In a, c, e, and g, the models were trained under noise level 1 conditions. Error bars, s.e.m. Experiments were repeated ten times independently with similar results; scale bars, 1 µm.
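
Panels g and h scan the weight of the self-constrained term. One plausible form of the training objective, shown purely as a sketch (the exact loss in the released SN2N code may differ), combines cross-prediction Noise2Noise terms with a consistency term between the two predictions:

```python
import torch
import torch.nn.functional as F

def sn2n_like_loss(net, x1, x2, lambda_c=1.0):
    """Cross-prediction Noise2Noise terms plus a prediction-consistency term."""
    p1, p2 = net(x1), net(x2)
    n2n = F.mse_loss(p1, x2) + F.mse_loss(p2, x1)   # each prediction vs. the other sub-image
    consistency = F.mse_loss(p1, p2)                 # self-constraint between the two predictions
    return n2n + lambda_c * consistency

# toy usage with a stand-in single-layer "network" and a random pair
net = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1)
x1, x2 = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
sn2n_like_loss(net, x1, x2, lambda_c=1.0).backward()
```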

Extended Data Fig. 3 Testing results of different noise levels and data amounts.

a, Denoising results of various methods under three different noise levels (Level 3, Level 2, and Level 1, from top to bottom) using the full training set. From left to right: Raw input, denoising results of PURE, ACsN, N2V, supervised learning ('Supervised'), SN2N without constraint ('SN2N w/o c'), and full SN2N. b, Quantitative comparisons of the results shown in a using PSNR (left), SSIM (middle), and RMSE (right) metrics (n = 10, measurements). c, Denoising results of learning-based methods using three different amounts (1/50, 1/5, and full data, from top to bottom) of training data under noise level 1. d, Quantitative comparison of the results shown in c using PSNR (left), SSIM (middle), and RMSE (right) metrics (n = 10, measurements). k denotes the slope (red lines) of the corresponding metric values along the data increment. e, f, Data uncertainty (e) and model uncertainty (f) of neural network models trained with different data amounts. Average standard deviation (s.d.) values were calculated from ten predictions of ten repeatedly acquired datasets or ten repeatedly trained models. Centerline, medians; limits, 75% and 25%; whiskers, maximum and minimum; error bars, s.e.m. Experiments were repeated three times independently with similar results; scale bars, 1 µm.
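
For reference, the three metrics reported in b and d can be computed as follows; this is a generic sketch using scikit-image, and the data_range convention is an assumption rather than the paper's exact evaluation code.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(pred, gt):
    """Return (PSNR, SSIM, RMSE) of a denoised image against the ground truth."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    rng = gt.max() - gt.min()
    psnr = peak_signal_noise_ratio(gt, pred, data_range=rng)
    ssim = structural_similarity(gt, pred, data_range=rng)
    rmse = float(np.sqrt(np.mean((pred - gt) ** 2)))
    return psnr, ssim, rmse
```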

Extended Data Fig. 4 Comparisons of SN2N versus RL-SN2N using SD-SIM and applying SN2N on SD-confocal microscopy.

a, Zoomed-in views (top) and FWHM distribution plots (bottom, calculated by LuckyProfiler) of the denoising results of different methods (cf. Fig. 2a). b, Comparison of SN2N and RL-SN2N. Top: SD-SIM (left) and its RL result (right); bottom: SN2N result (left) and RL-SN2N result (right). c, LRQ values of the results in b. d, SN2N denoising results (right) of an SD-confocal image (left) of the Argo-SIM slide. e, LRQ values of the results in d. f, Comparisons of SN2N with 2D U-Net (left), SN2N with 3D U-Net (middle), and RL-SN2N with 3D U-Net (right) on volumetric data (cf. Fig. 3b). g, Magnified views and their xz and yz cross-sections from the white-boxed regions in f. Experiments were repeated three times independently with similar results; scale bars, 500 nm (a, b, d); 1 µm (f, g).
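
RL-SN2N couples SN2N denoising with Richardson–Lucy (RL) deconvolution (refs. 45, 46). A bare-bones RL iteration is sketched below for orientation; the Gaussian PSF width and iteration count are illustrative assumptions, not the values used in the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size=15, sigma=2.0):
    """Isotropic 2D Gaussian PSF (illustrative only)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def richardson_lucy(image, psf, num_iter=20, eps=1e-12):
    """Plain RL update: estimate <- estimate * ((image / (estimate*psf)) * psf_mirror)."""
    image = np.clip(image.astype(np.float64), 0, None)
    estimate = np.full_like(image, image.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(num_iter):
        blurred = fftconvolve(estimate, psf, mode='same')
        estimate *= fftconvolve(image / (blurred + eps), psf_mirror, mode='same')
    return estimate
```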

Extended Data Fig. 5 SN2N-empowered automated subcellular segmentation and tracking.

a, Workflow. Step 1, RL deconvolution; Step 2, RL-SN2N inference; Step 3, segmentation; Step 4, tracking; Step 5, extraction of motion features; Step 6, classification; Step 7, topology graph construction; Step 8, specific downstream analysis. b, A representative example of dual-color SR imaging of mitochondria (Mito, green) and ER (magenta) labeled with Tom20–mCherry and Sec61β-EGFP in live COS-7 cells under raw SD-SIM (left) and RL-SN2N (right). c, The white box in b is enlarged and shown at seven time points under different configurations. From top to bottom: Raw SD-SIM, dual-color RL-SN2N, single-channel (Mito) RL-SN2N, RL SD-SIM segmentation, and RL-SN2N segmentation results. The yellow and white arrows indicate the mitochondrial fission event and the site before fission, respectively. d, e, Results of Mito (d) and ER (e) segmentations (first row) using the Otsu hard threshold (first column) and Mitonet/ERnet (second column), and their skeletonizations (second row), under SD-SIM (left) and RL-SN2N (right). f, Otsu segmentation results for Lys (red) and GA (blue) under SD-SIM (left) and RL-SN2N (right). g, A representative four-color segmentation result under RL-SN2N. h, Spatial distribution of Lys assigned with different motion behaviors. i, Distribution of the estimated α values of Lys versus their temporal average of minimum distances to Mito (n = 46). j, Distribution of the Lys–Mito MOCs' standard deviation (s.d.) versus their mean values. k, Illustrations of the MSD curves for different motion behaviors of Lys. Curves are color-coded by the corresponding ER–Lys distances. Experiments were repeated three times independently with similar results; scale bars, 2 μm (b, c, e).
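
The motion analysis in panels h, i, and k rests on standard single-particle-tracking quantities (ref. 65): the mean squared displacement (MSD) of each trajectory and the anomalous exponent α from a log–log fit of MSD against lag time. The sketch below illustrates this; the lag range and fitting choices are assumptions, not the paper's exact analysis.

```python
import numpy as np

def msd(track):
    """track: (T, 2) array of x, y positions; returns lag times and MSD values."""
    T = len(track)
    lags = np.arange(1, max(T // 4, 2))          # restrict to short lags
    values = [np.mean(np.sum((track[lag:] - track[:-lag]) ** 2, axis=1)) for lag in lags]
    return lags, np.asarray(values)

def alpha_exponent(track, dt=1.0):
    """Anomalous exponent from the log-log slope of MSD(t) ~ t^alpha."""
    lags, m = msd(track)
    slope, _ = np.polyfit(np.log(lags * dt), np.log(m), 1)
    return slope   # ~1 diffusive, <1 confined, >1 directed motion
```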

Extended Data Fig. 6 RL-SN2N can suppress noise in undersampled data from EMCCD SD-SIM.

a, 3D renderings of live COS-7 cells labeled with Hoechst (green) and MitoTracker Deep Red (magenta) under raw SD-SIM equipped with an EMCCD camera (94 nm pixel size versus <150 nm resolution). b, Representative lateral slices from volume in a. c, RL-SN2N results of a with additional 2× upsampling before RL deconvolution (47 nm pixel size). d, Representative lateral slices from volume in c. e, Another RL-SN2N time point. f, Representative lateral slices from volume in e. Experiments were repeated three times independently with similar results; scale bars, 5 μm.

Extended Data Fig. 7 Full data of SOFI-SN2N results and SN2N-assisted expansion microscopy (ExM-SN2N).

a, Full fields of view of the wide-field (top left), 20-frame 2nd-order SOFI (2nd SOFI 20f, top right), 2D-SIM (bottom left), and 20-frame 2nd-order SOFI-SN2N (bottom right) images (cf. Fig. 6b). b, SN2N results of 2nd-, 3rd-, and 4th-order SOFI (from left to right) using 20, 50, 100, 200, 500, and 1,000 frames (from top to bottom) (cf. Fig. 6e). c–e, Average SSIM values of the 2nd- (c), 3rd- (d), and 4th-order (e) SOFI-SN2N results (n = 5, measurements). f, Comparison of temporal and spatial sampling methods. From left to right: SOFI reconstruction, SN2N result using temporal sampling (the first 20 frames vs. the second 20 frames), and SN2N result using spatial sampling. g, A 2-times-expanded (2×, top) and a 4-times-expanded (4×, bottom) COS-7 cell immunostained with a primary antibody against α-tubulin and a secondary antibody conjugated to Alexa Fluor 488, under wide-field microscopy (left) and its SN2N denoised result (right). Signal-to-background ratios (SBR) are labeled. h, Magnified views of the white-boxed regions in g under ExM (top) and SN2N denoised results (bottom). i, Intensity profiles and multiple-Gaussian fitting of the filaments indicated by the white arrows in h. Numbers represent the distances between peaks; a.u., arbitrary units. j, A 4.5-times-expanded (4.5×) COS-7 cell labeled with Sec61β–GFP under wide-field microscopy (left) and its SN2N denoised result (right). k, Enlarged regions enclosed by the white box in j under ExM-4.5× (left) and its SN2N result (right). Centerline, medians; limits, 75% and 25%; whiskers, maximum and minimum; error bars, s.e.m. Experiments were repeated three times independently with similar results; scale bars, 2 µm (a), 1 µm (b, h, j, k), and 5 μm (g).
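
As background for the SOFI panels, a second-order SOFI image (ref. 30) is, in its simplest auto-cumulant form, the per-pixel variance of the temporal intensity fluctuations. The sketch below shows only this simplest case; the cross-cumulant and higher-order reconstructions used in the figure are more involved.

```python
import numpy as np

def sofi2(stack):
    """stack: (T, H, W) frame series; returns a 2nd-order auto-cumulant (SOFI) image."""
    fluctuations = stack - stack.mean(axis=0, keepdims=True)
    return np.mean(fluctuations ** 2, axis=0)   # per-pixel variance of the fluctuations
```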

Extended Data Fig. 8 SN2N removes random, non-continuous artifacts in low-SNR SIM with two strategies.

a, Pipeline of SIM-SN2N using the raw-image resampling strategy (Methods). The self-supervised data generation is applied to the nine raw images followed by the SIM reconstruction. b, d, f, Clathrin-coated pits (CCPs, b), microtubules (d), and ER (f) recorded by SIM under the low-SNR condition (SIM-L, left) and their SN2N results (right). c, e, g, SIM reconstructions (left) of CCPs (c), microtubules (e), and ER (g) under low (L, top), medium (M, middle), and high (H, bottom) SNR conditions and their SN2N results (right). SSIM values are labeled in the bottom-right corners. h, The mitochondrial cristae structures in live COS-7 cells labeled with MitoTracker Green under 2D-SIM (bottom-left boxed region) and SN2N-SIM imaging at the first time point. i, j, Representative montages of 11 time points from the yellow-boxed region in h under 2D-SIM (i) and SIM-SN2N (j). k, Workflow of SIM-SN2N using the SIM-image resampling strategy (Methods). After SIM reconstruction, we apply a SIM-specific self-supervised data generation involving 3 × 3 pixel (1 + 3 + 7 + 9 versus 2 + 4 + 6 + 8) resampling followed by a 3× Fourier interpolation. l, n, p, CCPs (l), microtubules (n), and ER (p) recorded by SIM under the low-SNR condition (left) and their SN2N results (right). m, o, q, SIM reconstructions (left) of CCPs (m), microtubules (o), and ER (q) under low (top), medium (middle), and high (bottom) SNR conditions and their SN2N results (right). SSIM values are labeled in the bottom-right corners. r, A representative live COS-7 cell labeled with LifeAct–EGFP under ultrafast TIRF (left), TIRF-SIM (middle), and SIM-SN2N (right). s, t, Enlarged regions enclosed by the white box in r under TIRF-SIM (s) and SIM-SN2N (t). Experiments were repeated three times independently with similar results. Scale bars, 2 μm (b, d, f, h) and 1 μm (c, e, g, j, r, t).
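
The SIM-image resampling described in k can be sketched as follows: within each 3 × 3 block (pixels numbered 1–9 row by row), the corner pixels (1, 3, 7, 9) and the edge pixels (2, 4, 6, 8) are combined into two sub-images, which would then be Fourier-interpolated 3× back to the original grid. Averaging the four pixels of each group is an assumption made for this illustration; the released code may combine them differently.

```python
import numpy as np

def sim_resample(sim_img):
    """Split a reconstructed SIM image into corner-pixel and edge-pixel sub-images
    from each non-overlapping 3x3 block (pixels 1, 3, 7, 9 vs. 2, 4, 6, 8)."""
    h, w = (sim_img.shape[0] // 3) * 3, (sim_img.shape[1] // 3) * 3
    b = sim_img[:h, :w].reshape(h // 3, 3, w // 3, 3).astype(np.float32)
    corners = (b[:, 0, :, 0] + b[:, 0, :, 2] + b[:, 2, :, 0] + b[:, 2, :, 2]) / 4.0
    edges = (b[:, 0, :, 1] + b[:, 1, :, 0] + b[:, 1, :, 2] + b[:, 2, :, 1]) / 4.0
    return corners, edges   # each would then be Fourier-interpolated 3x
```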

Extended Data Fig. 9 SN2N maintains the linear response of Ca2+ transients obtained by the SD-SIM.

a, A representative live COS-7 cell transfected with GCaMP6s and stimulated with ATP (10 μM). One snapshot under SD-SIM (left) and after SN2N denoising (right) is shown. b, Magnified views of the regions enclosed by white boxes 1–4 in a. c, ATP-stimulated calcium traces from the corresponding macrodomains in b. d, Denoising of fast calcium transients using the published two-photon microscopy dataset from ref. 22. From left to right: Low-SNR recording of somatic signals, DeepCAD data, SN2N (spatial) denoising results, SN2N (temporal) denoising results obtained by replacing the spatial diagonal resampling with temporally interleaved resampling, and the high-SNR data (tenfold imaging SNR). Magnified views of the white-boxed regions are shown in the bottom row. e, Fluorescence traces extracted from the yellow-boxed regions in d. The trajectories' Pearson correlation coefficients (r) are labeled in the bottom-right corners. Experiments were repeated three times independently with similar results. Scale bars, 5 µm (a), 2 µm (b), and 50 µm (d).
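
The SN2N (temporal) variant in panel d replaces spatial diagonal resampling with temporally interleaved resampling, that is, pairing alternating frames of the recording so that the slowly varying calcium signal is shared while the noise is independent. A minimal sketch (the exact pairing in the released code may differ):

```python
import numpy as np

def temporal_pairs(stack):
    """stack: (T, H, W) movie; returns two temporally interleaved sub-movies
    that serve as the noisy input/target pair."""
    T = (stack.shape[0] // 2) * 2
    return stack[0:T:2], stack[1:T:2]
```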

Extended Data Fig. 10 Generalization of SN2N across different SNR conditions and pixel sizes (structural scales).

a–f, Testing generalization across different SNR conditions. Color-coded 3D distributions and their xz and yz cross-sections of all mitochondria (labeled with Tom20–mCherry) in a live COS-7 cell (cf. Fig. 3b). a–c, SN2N results (trained with the first volume) of the first volume (0 min) (a) and the last volume (2.5 min) (b), and the SN2N prediction of the SN2N prediction (SN2N2) from the last SD-SIM volume (c). d, e, SN2N results (trained with the last SD-SIM volume) of the first (d) and the last (e) volume. f, Zoomed-in views from the white-boxed regions in a–e. First column: 0 min (top) and 2.5 min (bottom) SD-SIM; second column: SN2N and SN2N2 (bottom half of bottom) results (trained with the 0 min SD-SIM volume) of the 0 min (top) and 2.5 min (bottom) SD-SIM; third column: SN2N results (trained with the 2.5 min SD-SIM volume) of the 0 min (top) and 2.5 min (bottom) SD-SIM. g–k, Testing generalization across different pixel sizes. g, Nuclear pores in HeLa cells were labeled with an anti-Mab414 primary antibody and an Alexa594 secondary antibody and observed under STED and STED-SN2N configurations. h, STED images (left) of 20.66 nm pixel size (top), 7.10 nm pixel size (middle), and 20.66 nm pixel size subsampled from 7.10 nm (bottom), and their SN2N results (right) from the model trained with data of 20.66 nm pixel size. i, STED images (left) of 7.10 nm pixel size (top), 20.66 nm pixel size (middle), and 7.10 nm pixel size Fourier-upsampled from 20.66 nm (bottom), and their SN2N results (right) from the model trained with data of 7.10 nm pixel size. j, k, Average FWHM values of STED (gray) and SN2N results (yellow) from the model trained with data of 20.66 nm pixel size (j) and 7.10 nm pixel size (k) (n = 5, measurements). Centerline, medians; limits, 75% and 25%; whiskers, maximum and minimum; error bars, s.e.m. Experiments were repeated three times independently with similar results. Scale bars, 5 µm (e), 1 µm (f–h).
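
The FWHM values summarized in j and k come from intensity line profiles across nuclear pores. As a generic illustration (this is not the LuckyProfiler implementation), the FWHM can be obtained by fitting a 1D Gaussian to a profile and converting its width:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, offset):
    return amp * np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2)) + offset

def fwhm_from_profile(profile, pixel_size_nm):
    """Fit a Gaussian to a 1D intensity profile and return its FWHM in nm."""
    profile = np.asarray(profile, dtype=float)
    x = np.arange(profile.size)
    p0 = [profile.max() - profile.min(), float(np.argmax(profile)), 2.0, profile.min()]
    (amp, mu, sigma, offset), _ = curve_fit(gaussian, x, profile, p0=p0)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma) * pixel_size_nm
```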

Supplementary information

Supplementary Information

Supplementary Figs. 1–12, Supplementary Tables 1–8, and captions for Supplementary Videos 1–9.

Reporting Summary

Supplementary Video 1

SN2N workflows. Part I provides the detailed training workflow of SN2N (Extended Data Fig. 1). Part II showcases the workflows of SN2N, RL-SN2N, 3D RL-SN2N, SOFI-SN2N, SIM-SN2N (raw resampling) and SIM-SN2N (SIM resampling).

Supplementary Video 2

SN2N eliminates noise and noise fluctuations in known structures. Part I shows the comparisons of SD-SIM, PURE, N2V and SN2N on the commercial Argo-SIM slide under the SpinSR10 SD-SIM system across various exposure conditions (Fig. 2a). Part II shows the noise fluctuations captured by SD-SIM and SN2N under 1× exposure condition.

Supplementary Video 3

RL-SN2N enables dual-color live-cell SR SD-SIM imaging. Part I shows the fission events of mitochondria (Mito, green) and ER (magenta) labeled with Tom20-mCherry and Sec61β-EGFP in live COS-7 cells captured by SD-SIM and RL-SN2N (Extended Data Fig. 5b). The yellow arrows indicate the mitochondrial fissions. Part II demonstrates a live-cell SR imaging gallery of live COS-7 cells labeled with Tom20-mCherry (Mito, green), Sec61β-EGFP (ER, magenta), Lamp1–mCherry (Lys, cyan), Tubulin-EGFP (Tubulin, red), or MitoTracker (Mito, orange) captured by SD-SIM and RL-SN2N.

Supplementary Video 4

RL-SN2N unlocks four-color, live-cell SR imaging and facilitates the downstream analysis. Part I shows the four-color imaging of Mito (green), ER (gray), lysosome (red) and GA (blue) labeled with MitoTracker Deep Red FM, Sec61β-EGFP, Lamp1–mCherry and Golgi-BFP in live COS-7 cells under raw SD-SIM and RL-SN2N (Fig. 3c). Part II highlights the four-color segmentation results under RL-SN2N. Part III provides the trajectories and spatial distributions of lysosomes assigned with different motion behaviors. Part IV tracks multiple organelle interaction events captured by RL-SN2N.

Supplementary Video 5

3D RL-SN2N offers volumetric SR imaging of the OMM network in live cells. Part I shows the 3D color-coded volumes of the OMM network in live COS-7 cells labeled with Tom20–mCherry, captured by raw SD-SIM and RL-SN2N (Fig. 4b). Part II shows the OMM atlas: high-quality 3D SR data enable us to easily map and manipulate each mitochondrion. Part III demonstrates the 4D imaging of the OMM network in live COS-7 cells captured by SD-SIM and RL-SN2N (Fig. 4e).

Supplementary Video 6

3D RL-SN2N on SD-SIM enables fast long-term live-cell SR imaging across 5D, recording the entire cell mitosis process. Part I shows the 5D imaging of mitochondria (green, mGold-Mito-N-7), ER (magenta, DsRed-ER) and nucleus (cyan, SPY650-DNA) in live COS-7 cells under raw SD-SIM and RL-SN2N (Fig. 4g). Part II displays the dynamics of mitochondrial (green) and ER (magenta) networks after mitosis.

Supplementary Video 7

SN2N and RL-SN2N permit long-term live-cell STED imaging. Part I shows a live-cell SR imaging gallery of live COS-7 cells labeled with SiR-Tubulin (left), LifeAct-EGFP (middle) and Sec61β–EGFP (right) captured using a commercial STED microscope (Leica) and enhanced by SN2N (Fig. 5g). Part II features long-term live-cell imaging of mitochondrial cristae in PKMO-labeled COS-7 cells, captured with a commercial STED system (Abberior) under various conditions, including high depletion power (86%) with both long (100 μs per pixel) and short (10 μs per pixel) durations, as well as low depletion power (41%) with a short duration (10 μs per pixel) (Fig. 5i–m). Part III presents a comparison of long-term live-cell SR imaging of mitochondrial cristae using STED and RL-SN2N.

Supplementary Video 8

SOFI-SN2N supports efficient SOFI reconstruction in both fixed and live cells. Part I demonstrates a representative COS-7 cell labeled with QD525, imaged with a commercial SD-confocal microscope and reconstructed using 2nd-, 3rd- and 4th-order SOFI, along with its SOFI-SN2N results across 20, 50, 100, 200, 500 and 1,000 frames (Fig. 6e,f). Part II compares the OMM structures in a live COS-7 cell labeled with Skylan-S-TOM20, captured by SD-confocal, 2nd-order SOFI with 20 frames, and its SOFI-SN2N results (Fig. 6i,j).

Supplementary Video 9

SN2N removes random, noncontinuous artifacts in SIM reconstructions. Part I shows the denoising results of SIM-SN2N (raw resampling method) on the BioSR SIM dataset under high-, medium- and low-SNR levels (Extended Data Fig. 8b–g). Part II compares the mitochondrial cristae structures in live COS-7 cells labeled with MitoTracker Green by 2D-SIM and SN2N-SIM (raw resampling method; Extended Data Fig. 8h–j).

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Qu, L., Zhao, S., Huang, Y. et al. Self-inspired learning for denoising live-cell super-resolution microscopy. Nat Methods 21, 1895–1908 (2024). https://doi.org/10.1038/s41592-024-02400-9
