Fig. 2: MoDL’s first key pipeline: high-quality and accurate segmentation of mitochondrial fluorescence images.
From: Mitochondrial segmentation and function prediction in live-cell images with deep learning

a Schematic diagram showing super-resolution structured illumination microscopy (SR-SIM) imaging and the key steps of MoDL in mitochondrial segmentation (created with BioRender.com). b Segmentation performance (Dice coefficient, mIoU, and PA) comparison of MoDL (red dots) with open-source deep learning algorithms retrained on the SR dataset, i.e., U-Net [30], MitoSegNet [20], StarDist [17], and the Otsu threshold-based Mitometer [15] (green, cyan, blue, and orange dots, respectively). n = 200 images (512 × 512 pixels). c Segmentation results from MoDL. The original images were first predicted as binary segmentation masks, which were then multiplied by the fluorescence intensities of the original images to obtain pseudo-color images reflecting the fluorescence intensity of individual mitochondria. n = 30 images (2048 × 2048 pixels), scale bar = 5 μm. Zoomed-in views show the original and pseudo-color images of mitochondria in two separate cells (i and ii), scale bar = 1 μm. d Comparison between the ground truth and the results of the five segmentation algorithms (512 × 512 pixels). e Pseudo-color images (1024 × 1024 pixels) generated by MoDL for mitochondrial segmentation in cell lines included in (HeLa, 143B, L02) and excluded from (PC9, A549, PC12) the training dataset, scale bar = 2 μm. The Dice coefficient and the relative fluorescence intensity of the original image are indicated in the pseudo-color images. n = 28 (HeLa), 20 (143B), 27 (L02), 13 (PC9), 22 (A549), 16 (PC12) images (512 × 512 pixels). f Fluorescence imaging data obtained by SIM and confocal microscopy in the same field of view, scale bar = 2 μm. g Quantitative assessment of mitochondrial morphological features (mean area, form factor, branch length) between SIM and confocal images using MoDL (blue dots) and an adaptive threshold method (orange dots). n = 60 images (512 × 512 pixels). Data are given as mean ± SD (b, g).
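The panel b metrics and the panel c pseudo-color step can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy masks and the `raw` array below are invented stand-ins for the network output and an SR-SIM frame.

```python
import numpy as np

def dice(pred, gt):
    """Dice coefficient: 2|P ∩ G| / (|P| + |G|) for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def iou(pred, gt):
    """Intersection over union (the per-image term behind mIoU)."""
    return np.logical_and(pred, gt).sum() / np.logical_or(pred, gt).sum()

def pixel_accuracy(pred, gt):
    """PA: fraction of pixels (foreground + background) labeled correctly."""
    return (pred == gt).mean()

# Toy 4 × 4 binary masks (True = mitochondrion); illustrative only.
gt = np.array([[0, 1, 1, 0],
               [0, 1, 1, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0]], dtype=bool)
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]], dtype=bool)

print(dice(pred, gt), iou(pred, gt), pixel_accuracy(pred, gt))

# Pseudo-color step (panel c): mask × raw intensities zeroes the background
# while preserving each mitochondrion's original fluorescence values.
raw = np.arange(16, dtype=float).reshape(4, 4)  # stand-in for an SR-SIM image
pseudo = pred * raw
```

In practice `pred` would be the network's thresholded output and `raw` the original SR-SIM frame; the resulting `pseudo` array is then color-mapped for display.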
Statistical differences were calculated using a one-way ANOVA followed by Dunnett's multiple comparison test, P < 0.0001 vs. MoDL (b), or a two-tailed Student's t-test (g). Not significant (ns, P > 0.05). Source data are provided as a Source Data file.
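The statistical workflow can be reproduced in outline with SciPy. The group means, spreads, and random seed below are invented purely to make the snippet runnable; they are not the paper's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-image Dice scores: MoDL vs. four competing algorithms
# (n = 200 images each, matching panel b's sample size).
modl = rng.normal(0.95, 0.01, 200)
others = [rng.normal(m, 0.02, 200) for m in (0.88, 0.86, 0.84, 0.80)]

# One-way ANOVA across the five groups (panel b)
f_stat, p_anova = stats.f_oneway(modl, *others)

# Dunnett's multiple-comparison test vs. the MoDL control
# (scipy.stats.dunnett is available in SciPy >= 1.11)
if hasattr(stats, "dunnett"):
    dunnett_res = stats.dunnett(*others, control=modl)

# Two-tailed Student's t-test (panel g comparisons)
t_stat, p_ttest = stats.ttest_ind(modl, others[0])
```

With well-separated group means and n = 200 per group, both tests report very small P values, consistent with the P < 0.0001 threshold quoted in the legend.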