
Extended Data Fig. 8: RLN’s performance on volumes with very noisy input.

From: Incorporating the image formation process into deep learning improves network performance


a) An example of RLN failing when challenged with very noisy input. RLN was trained with noisy (SNR 2.45 dB) synthetic data (Supplementary Fig. 1) and tested on similar structures. Lateral (XY) and axial (ZY) views are shown; the RLN results show obvious artifacts. b-f) Comparison of RLD and RLN on super-resolved, low-SNR data from a live U2OS cell expressing ERmoxGFP, acquired with iSIM. b) Low-SNR input, XY view. c) RLN output with one-step training, that is, applying a model trained with low-SNR raw input and high-SNR deconvolved ground truth. d) RLN output with two-step deep learning, first applying a denoising RCAN model and then applying an RLN model to deconvolve the output of the first step. The RCAN model was trained on pairs of low/high-SNR raw data; the RLN model was trained on pairs of high-SNR raw data and high-SNR deconvolved data. e) Higher-magnification view of the red rectangle in d), comparing raw input, high-SNR deconvolved ground truth, RLD on low-SNR raw input, one-step RLN, one-step RCAN (same training data as one-step RLN, that is, low-SNR raw input with the high-SNR deconvolved result as ground truth), and two-step deep learning with RCAN for denoising and RLN for deconvolution. Insets show Fourier transforms of the data. f) Quantitative analysis with PSNR and SSIM for the raw input, RLD, one-step RLN, one-step RCAN, and the two-step RCAN + RLN result; open circles, means, and standard deviations are obtained from N = 6 volumes. Both one-step RCAN and one-step RLN outperform RLD, and the two-step method further boosts resolution and contrast, as indicated by the Fourier spectra in the insets. Scale bars: a) 10 pixels, b-d) 10 μm, e) 2 μm.
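For readers who wish to reproduce the PSNR/SSIM comparison in panel f), the following is a minimal sketch of per-volume metric computation using scikit-image; the normalization scheme, the variable names (volumes_pred, volumes_gt), and the aggregation over N = 6 volumes are illustrative assumptions, not the authors' exact evaluation code.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def volume_metrics(restored, ground_truth):
    """PSNR and SSIM for one 3D volume against the high-SNR deconvolved ground truth."""
    # Rescale both volumes to [0, 1] so that data_range is well defined
    # (assumed normalization; the paper's exact preprocessing may differ).
    gt = (ground_truth - ground_truth.min()) / (np.ptp(ground_truth) + 1e-12)
    pred = (restored - restored.min()) / (np.ptp(restored) + 1e-12)
    psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
    ssim = structural_similarity(gt, pred, data_range=1.0)
    return psnr, ssim

# Hypothetical usage: volumes_pred and volumes_gt are lists of N = 6 matched 3D arrays
# (e.g., two-step RCAN + RLN outputs and their deconvolved ground truths).
scores = np.array([volume_metrics(p, g) for p, g in zip(volumes_pred, volumes_gt)])
psnr_mean, ssim_mean = scores.mean(axis=0)
psnr_std, ssim_std = scores.std(axis=0)
```

Reporting the mean and standard deviation over the individual per-volume scores mirrors the open circles, means, and error bars shown in panel f).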
