Fig. 4: TockyConvNet: Deep learning-based analysis via image conversion and gradient mapping. | Nature Communications

From: Machine learning-assisted decoding of temporal transcriptional dynamics via fluorescent timer

a Schematic of the image conversion process applied to Timer fluorescence data. b Representative dot plots (left) and pseudocolour images (right) after conversion. c Architecture of the TockyConvNet model, comprising four convolutional layers used for Gradient-weighted Class Activation Mapping (Grad-CAM) shown in g, h. d Learning curve from three-fold cross-validation. e Receiver operating characteristic (ROC) and f precision–recall analyses benchmarking TockyConvNet against manual gating strategies. g, h Differential Grad-CAM heatmaps for WT vs. CNS2 KO samples across convolutional layers, shown in Timer Angle–Intensity (g) and Timer Blue–Red (h) spaces; the colour range is normalised per panel. i, j Violin plots showing kernel density estimates of the percentage of CNS2 feature cells in WT (i; top 90th percentile) and KO (j; bottom 10th percentile) samples, based on the differential Grad-CAM maps. Each point represents a biological replicate. Statistical analysis used the two-sided Mann–Whitney test; **p < 0.01, ****p < 0.001. Boxes show the interquartile range (25th–75th percentiles), centre lines indicate the median, and whiskers extend to the most extreme values within 1.5× IQR. n = 22 KO and 27 WT samples. Exact p-values are given in Supplementary Data 1. k Violin plots of mean fluorescence intensity (MFI) for the indicated markers in WT feature cells, other Timer-positive cells, and Timer-negative cells in WT samples (n = 27). Statistical significance was assessed using the Kruskal–Wallis test with Dunn's post-hoc test (Bonferroni correction); **p < 0.01, ***p < 0.005, ****p < 0.001. Timer-negative cells are shown for reference only and were excluded from statistical testing. Box and whisker definitions as above. Exact p-values are given in Supplementary Data 1.
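The differential Grad-CAM analysis in g, h can be sketched as follows: per-layer class-activation maps are computed for each sample (gradients of the class score global-average-pooled into channel weights, then a ReLU-rectified weighted sum of the feature maps, normalised per map), and a WT mean map minus KO mean map gives the differential heatmap. This is a minimal numpy illustration with hypothetical array shapes and random stand-in data, not the paper's implementation.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Minimal Grad-CAM sketch.

    activations: (H, W, C) feature maps from one convolutional layer.
    gradients:   (H, W, C) gradients of the class score w.r.t. those maps.
    """
    weights = gradients.mean(axis=(0, 1))                       # global-average-pool gradients -> per-channel weights
    cam = np.maximum((activations * weights).sum(axis=-1), 0.0)  # weighted sum over channels, then ReLU
    if cam.max() > 0:
        cam = cam / cam.max()                                   # normalise each map to [0, 1]
    return cam

# Hypothetical data: 5 WT and 5 KO samples, 8x8 feature maps with 4 channels.
rng = np.random.default_rng(0)
wt_cams = np.stack([grad_cam(rng.random((8, 8, 4)), rng.random((8, 8, 4)))
                    for _ in range(5)])
ko_cams = np.stack([grad_cam(rng.random((8, 8, 4)), rng.random((8, 8, 4)))
                    for _ in range(5)])

# Differential map (WT minus KO): positive regions highlight WT-enriched
# features, negative regions KO-enriched features, as visualised in g, h.
diff_map = wt_cams.mean(axis=0) - ko_cams.mean(axis=0)
```

Thresholding such a differential map (e.g. at its top 90th or bottom 10th percentile) yields the feature-cell gates quantified in panels i and j.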
