Fig. 9: Data augmentation and transfer learning can improve performance.

From: Democratising deep learning for microscopy with ZeroCostDL4Mic

a–c Data augmentation can improve prediction performance, shown here for YOLOv2 cell shape detection applied to a bright-field time-lapse dataset. a Raw bright-field input image. b Ground truth and YOLOv2 model predictions (after 30 epochs) with increasing amounts of data augmentation. The original dataset contained 30 images, which were augmented first by vertical and horizontal mirroring and then by 90° rotations. c mAP (mean average precision) as a function of epoch number for different levels of data augmentation. d, e Transfer learning from a pretrained model can yield very high-quality StarDist predictions after as few as 5 epochs; these panels also highlight that a pretrained model, even one trained on a large dataset, can produce inappropriate results. d Examples of StarDist segmentation results obtained with models trained for 5, 20 or 200 epochs, starting either from a blank model ("De novo" training) or from the 2D-versatile-fluo model (transfer learning). e StarDist QC metrics obtained with the models highlighted in (d) (n = 13 images). The IoU (intersection over union) scores are calculated over the whole image, while the F1 scores are calculated on a per-object basis. Results are displayed as boxplots, which represent the median and the 25th and 75th percentiles (interquartile range); outliers are represented by dots (ref. 96). Note that the axes of both graphs are truncated. Source data for panels (c) and (e) are provided in the Source Data file.
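For readers who want to reproduce this kind of augmentation, the mirroring-plus-rotation scheme described for (b) corresponds to the eight symmetries of a square image. Below is a minimal Python sketch of that idea; it is illustrative only, not the ZeroCostDL4Mic notebook code, and it transforms images only, so for YOLOv2 object detection the bounding-box labels would need the matching transforms.

    import numpy as np

    def eight_fold_augment(image):
        """Return the 8 symmetry variants of a 2D image: the 4 quarter-turn
        rotations of the original plus the 4 rotations of its horizontal
        mirror (the group generated by mirroring and 90-degree rotation)."""
        variants = [np.rot90(image, k) for k in range(4)]      # 0, 90, 180, 270 degrees
        mirrored = np.fliplr(image)                            # horizontal mirror
        variants += [np.rot90(mirrored, k) for k in range(4)]  # mirrored rotations
        return variants

    # Example with stand-in data: a 30-image dataset grows 8-fold to 240 images.
    dataset = [np.random.rand(64, 64) for _ in range(30)]
    augmented = [v for img in dataset for v in eight_fold_augment(img)]
    assert len(augmented) == 240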
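The QC metrics reported in (e) can likewise be sketched in a few lines. This is a simplified illustration, not the actual StarDist QC notebook: the image-level IoU compares two binary masks, and the per-object F1 assumes that predicted objects have already been matched one-to-one with ground-truth objects at some IoU threshold (e.g. 0.5), which is an assumption of this sketch.

    import numpy as np

    def image_iou(gt_mask, pred_mask):
        """Whole-image IoU: intersection over union of two binary masks."""
        gt, pred = gt_mask.astype(bool), pred_mask.astype(bool)
        union = np.logical_or(gt, pred).sum()
        if union == 0:
            return 1.0  # both masks empty: perfect agreement by convention
        return np.logical_and(gt, pred).sum() / union

    def object_f1(n_matched, n_pred, n_gt):
        """Per-object F1 from object counts: n_matched predicted objects
        were paired one-to-one with ground-truth objects."""
        if n_pred + n_gt == 0:
            return 1.0
        precision = n_matched / n_pred if n_pred else 0.0
        recall = n_matched / n_gt if n_gt else 0.0
        if precision + recall == 0:
            return 0.0
        return 2 * precision * recall / (precision + recall)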
