
Fig. 4: Image denoising and restoration networks (CARE and Noise2Void).

From: Democratising deep learning for microscopy with ZeroCostDL4Mic

Example of data generated using the ZeroCostDL4Mic CARE and Noise2Void notebooks. a, b A 3D CARE network was trained on SIM images of the actin cytoskeleton of DCIS.COM cells acquired from fixed samples (a) and used to denoise live-cell imaging data (b). Quality control metrics: mSSIM 0.74, PSNR 26.9, NRMSE 0.15. a Fixed samples were imaged using SIM to obtain low signal-to-noise (lifeact-RFP, Training Source) and matching high signal-to-noise (Phalloidin staining, Training Target) images, and this paired dataset was used to train CARE. The input, ground truth and CARE prediction are displayed (both a single Z plane and maximal projections). The QC metric values computed directly in the CARE notebook are indicated. b The network trained in (a) was then used to restore live-cell imaging data (Supplementary Movie 6). The low SNR image (input) and the associated CARE prediction are displayed (single plane). c Movie of an ovarian carcinoma cell labelled with lifeact-RFP migrating on cell-derived matrices (labelled for fibronectin), denoised using Noise2Void. Both the training source and the Noise2Void prediction are displayed (Supplementary Movie 7). For each channel, a single Z stack (time point) was used to train Noise2Void, and the resulting model was applied to the rest of the movie. d Movie of a glioma cell endogenously labelled for paxillin-GFP, migrating on a 9.6 kPa polyacrylamide hydrogel, and imaged using a spinning-disk confocal (SDC) microscope. Both the training source and the Noise2Void prediction are displayed (Supplementary Movie 9). A single image (time point) was used to train Noise2Void, and the resulting model was applied to the rest of the movie. For all panels, yellow squares highlight a region of interest that is magnified.
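
Panels (c, d) describe a Noise2Void workflow in which a model is trained on a single time point and then applied to every frame of the movie. As a rough illustration of that workflow (not the authors' exact notebook code), the sketch below uses the open-source n2v Python package that the ZeroCostDL4Mic Noise2Void notebooks build on; the file names, patch size and training parameters are placeholders, not the settings used for this figure.

```python
# Minimal sketch, assuming a 2D time-lapse stored as a TIFF stack (hypothetical file name):
# train Noise2Void on the first time point, then denoise every frame with that model.
import numpy as np
from tifffile import imread, imwrite
from n2v.models import N2V, N2VConfig
from n2v.internals.N2V_DataGenerator import N2V_DataGenerator

movie = imread("lifeact_movie.tif")                 # hypothetical file, shape (T, Y, X)
frame0 = movie[0][np.newaxis, ..., np.newaxis]      # (1, Y, X, 1), the layout the data generator expects

# Extract training/validation patches from the single training frame
datagen = N2V_DataGenerator()
patches = datagen.generate_patches_from_list([frame0], shape=(64, 64))
split = int(0.9 * len(patches))
X, X_val = patches[:split], patches[split:]

# Illustrative hyperparameters; adjust to the data at hand
config = N2VConfig(X, train_epochs=100, train_steps_per_epoch=100,
                   train_batch_size=128, n2v_patch_shape=(64, 64))
model = N2V(config, "n2v_single_frame", basedir="models")
model.train(X, X_val)

# Apply the single-frame model to the rest of the movie
denoised = np.stack([model.predict(frame.astype(np.float32), axes="YX")
                     for frame in movie])
imwrite("lifeact_movie_denoised.tif", denoised)
```

For 3D data such as the Z stacks in panel (c), the same steps apply with arrays in SZYXC layout, a 3D patch shape and axes="ZYX" at prediction time.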
