Fig. 8: Quality control of trained models.
From: Democratising deep learning for microscopy with ZeroCostDL4Mic

a Overfitting models: training loss and validation loss curves of a CARE network trained with different hyperparameters. The upper panel shows a model that fits unseen (validation) data well (main training parameters, number_of_epochs: 100, patch_size: 256, number_of_patches: 10, Use_Default_Advanced_Parameters: enabled); the lower panel shows a model that overfits the training dataset (main training parameters, number_of_epochs: 100, patch_size: 80, number_of_patches: 200, Use_Default_Advanced_Parameters: enabled). b RSE (root-squared error) and SSIM (structural similarity index) maps: an example of quality control for CARE denoising model performance. The quality control metric values computed directly in the notebook are: mSSIM (mean structural similarity index): 0.56 and NRMSE (normalised root-mean-squared error): 0.18 for target vs source, and mSSIM: 0.90 and NRMSE: 0.10 for target vs prediction. c IoU (intersection over union) maps: an example of quality control metrics for a StarDist segmentation result, where IoU: 0.93 and F1: 0.97. d Precision–recall (p–r) curves: curves for the dataset shown in Supplementary Fig. 13, highlighting the effect of augmentation on the performance metrics of the YOLOv2 model; AP (average precision) for the 'elongated' class improved from 0.53 to 0.84 upon 8× augmentation, while F1 improved from 0.62 to 0.85.
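For readers reproducing the panel-a check outside the notebooks, a minimal sketch of plotting training versus validation loss with matplotlib is shown below. The loss values here are invented placeholders standing in for the history dictionary returned by Keras' model.fit(); this is an illustration, not the notebooks' own plotting code.

```python
import matplotlib.pyplot as plt

# Hypothetical loss values standing in for the history returned by Keras'
# model.fit(); replace them with the values from your own training run.
history = {
    "loss":     [0.90, 0.55, 0.40, 0.30, 0.24, 0.20, 0.17, 0.15, 0.13, 0.12],
    "val_loss": [0.92, 0.60, 0.45, 0.38, 0.36, 0.37, 0.40, 0.44, 0.49, 0.55],
}

epochs = range(1, len(history["loss"]) + 1)
plt.plot(epochs, history["loss"], label="training loss")
plt.plot(epochs, history["val_loss"], label="validation loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
# A validation loss that climbs while the training loss keeps falling is the
# signature of overfitting seen in the lower panel of a.
plt.show()
```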
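The panel-b metrics can be approximated with scikit-image. The sketch below assumes a Euclidean-normalised NRMSE and a per-pixel RSE map; the notebook's exact normalisation choices may differ, so treat this as an illustration of the metrics rather than the notebook's implementation.

```python
import numpy as np
from skimage.metrics import structural_similarity, normalized_root_mse

def qc_maps(target, prediction):
    """Return mSSIM, the per-pixel SSIM map, NRMSE and an RSE map."""
    target = target.astype(np.float32)
    prediction = prediction.astype(np.float32)
    data_range = float(target.max() - target.min())
    # full=True returns the per-pixel SSIM image alongside the mean value
    mssim, ssim_map = structural_similarity(
        target, prediction, data_range=data_range, full=True
    )
    nrmse = normalized_root_mse(target, prediction)  # Euclidean normalisation by default
    rse_map = np.sqrt((target - prediction) ** 2)    # per-pixel root-squared error
    return mssim, ssim_map, nrmse, rse_map

# Random stand-ins for a real target/prediction image pair
rng = np.random.default_rng(0)
target = rng.random((128, 128))
prediction = target + 0.05 * rng.standard_normal((128, 128))
mssim, ssim_map, nrmse, rse_map = qc_maps(target, prediction)
print(f"mSSIM: {mssim:.2f}, NRMSE: {nrmse:.2f}")
```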
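Panel c's IoU and F1 are computed in the notebook from matched objects (StarDist provides its own matching utilities); the pixel-level version below is a simplified, hypothetical stand-in that conveys the definitions rather than the notebook's object-level matching.

```python
import numpy as np

def iou_and_f1(gt_mask, pred_mask):
    """Pixel-level IoU and F1 (Dice) between two binary masks."""
    gt = np.asarray(gt_mask, dtype=bool)
    pred = np.asarray(pred_mask, dtype=bool)
    intersection = np.logical_and(gt, pred).sum()
    union = np.logical_or(gt, pred).sum()
    iou = intersection / union if union else 1.0
    denom = gt.sum() + pred.sum()
    f1 = 2 * intersection / denom if denom else 1.0
    return iou, f1

# Toy masks standing in for ground-truth and predicted segmentations
gt = np.zeros((64, 64), dtype=bool); gt[10:40, 10:40] = True
pred = np.zeros((64, 64), dtype=bool); pred[12:42, 12:42] = True
print("IoU: %.2f, F1: %.2f" % iou_and_f1(gt, pred))
```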
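The p–r curves and AP values in panel d summarise detections ranked by confidence. The sketch below computes a p–r curve and a step-integrated AP from hypothetical confidence scores and true-positive flags; it assumes detections have already been matched to ground truth and is not the YOLOv2 notebook's evaluation code (PASCAL VOC-style interpolated AP differs slightly).

```python
import numpy as np

def precision_recall_ap(scores, is_tp, n_ground_truth):
    """p-r points and step-integrated AP from ranked detections."""
    order = np.argsort(scores)[::-1]                    # sort by confidence, descending
    hits = np.asarray(is_tp, dtype=float)[order]
    tp = np.cumsum(hits)                                # cumulative true positives
    fp = np.cumsum(1.0 - hits)                          # cumulative false positives
    recall = tp / n_ground_truth
    precision = tp / (tp + fp)
    # AP as the area under the step-wise p-r curve
    ap = np.sum(np.diff(np.concatenate(([0.0], recall))) * precision)
    return precision, recall, ap

# Hypothetical detections: confidence scores and whether each matched a ground truth
scores = [0.95, 0.90, 0.80, 0.70, 0.60, 0.50]
is_tp = [True, True, False, True, False, True]
precision, recall, ap = precision_recall_ap(scores, is_tp, n_ground_truth=5)
print(f"AP: {ap:.2f}")
```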