Extended Data Fig. 7: Deep learning-based image analysis.
From: Automated high-speed 3D imaging of organoid cultures with multi-scale phenotypic quantification

a. Schematic description of the deep learning-based analysis pipeline for mitosis and apoptosis detection using the YOLOv5 CNN published with ZeroCostDL4Mic, run with default settings on an NVIDIA Quadro RTX 6000 (24 GB). The ground truth consisted of a training dataset in which mitoses and apoptoses were manually pinpointed by drawing bounding boxes around the central plane of each apoptosis (blue) and mitosis (red). A total of 344 mitoses and 1,123 apoptoses were manually picked for training and augmented to a final training set of 4,128 mitoses and 13,476 apoptoses (see Methods). We used 90% of this dataset for training and 10% for validation of the network. The AI detection results (Prediction) showed an accuracy of 89% compared to the ground truth. The prediction was obtained by processing each z image plane independently and post-processing the detections to obtain the 3D assignment of all events (see Methods). b. Illustration of the CNN architecture used for whole-organoid shape classification, corresponding to a 3D adaptation of the DenseNet121 model2. It resulted in 99% accuracy compared to the ground truth for the classification of organoids into ‘B-shape’ and ‘O-shape’ (representative images). c. Illustration of the Feature Pyramid Network (FPN) architecture with successive ResBlocks in the backbone (C0–C5) of the bottom-up pathway (blue). The remaining feature maps (P0–P5) were generated by combining convolutions applied to the feature maps of the bottom-up pathway with up-sampling applied at each level of the pyramid (green). The probability score and segmentation output (streak outlines) were given by applying a softmax layer to P0.
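
As a concrete illustration of the post-processing step in panel a, the Python sketch below links per-plane 2D detections into 3D events by greedy nearest-centroid matching across adjacent z planes. This is a minimal stand-in for the post-processing described in Methods, not the authors' code; the detection tuple format and the `max_dist` and `max_z_gap` thresholds are assumptions chosen for illustration.

```python
# Minimal sketch (not the authors' exact post-processing) of linking per-plane
# 2D detections into 3D events. Assumes each detection is (z, x_center, y_center,
# class_label) and that detections of the same event in adjacent z planes lie
# within `max_dist` pixels of each other; names and thresholds are illustrative.
from collections import defaultdict

def link_detections_3d(detections, max_dist=15.0, max_z_gap=1):
    """Greedily group per-plane detections into 3D events."""
    events = []  # each event: list of detections belonging to one mitosis/apoptosis
    by_z = defaultdict(list)
    for det in detections:
        by_z[det[0]].append(det)

    for z in sorted(by_z):
        for det in by_z[z]:
            _, x, y, label = det
            # Try to attach to an existing event of the same class that ends
            # in a nearby plane with a nearby centroid.
            best = None
            for ev in events:
                last_z, lx, ly, llabel = ev[-1]
                if llabel == label and 0 < z - last_z <= max_z_gap:
                    if ((x - lx) ** 2 + (y - ly) ** 2) ** 0.5 <= max_dist:
                        best = ev
                        break
            if best is not None:
                best.append(det)
            else:
                events.append([det])
    return events

# Example: two planes of the same apoptosis plus one isolated mitosis -> 2 events
dets = [(10, 100.0, 120.0, "apoptosis"), (11, 102.0, 118.0, "apoptosis"),
        (30, 400.0, 50.0, "mitosis")]
print(len(link_detections_3d(dets)))  # 2
```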
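
For panel b, one way to instantiate a 3D adaptation of DenseNet121 for the two-class (‘B-shape’ vs ‘O-shape’) organoid classifier is via MONAI's implementation. The use of MONAI, the input volume size and the channel counts are assumptions for illustration, not necessarily the authors' implementation.

```python
# One way to obtain a 3D DenseNet121 classifier, using MONAI's implementation
# as a stand-in for the authors' 3D adaptation (in_channels/out_channels and
# the input volume size are illustrative assumptions).
import torch
from monai.networks.nets import DenseNet121

model = DenseNet121(spatial_dims=3, in_channels=1, out_channels=2)  # B-shape vs O-shape
volume = torch.randn(1, 1, 64, 64, 64)  # (batch, channel, z, y, x)
logits = model(volume)
probs = torch.softmax(logits, dim=1)  # class probabilities for the two shape classes
print(probs.shape)  # torch.Size([1, 2])
```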
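
For panel c, the sketch below shows a generic FPN top-down pathway in PyTorch: 1×1 lateral convolutions, nearest-neighbour up-sampling with element-wise addition, 3×3 smoothing convolutions, and a per-pixel softmax on the finest pyramid level. The channel widths, pyramid depth (four levels rather than C0–C5/P0–P5) and class count are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a Feature Pyramid Network top-down pathway in PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFPNHead(nn.Module):
    def __init__(self, backbone_channels=(64, 128, 256, 512), fpn_channels=64, n_classes=2):
        super().__init__()
        # 1x1 lateral convolutions project each backbone map to a common width
        self.lateral = nn.ModuleList([nn.Conv2d(c, fpn_channels, 1) for c in backbone_channels])
        # 3x3 convolutions smooth each merged pyramid map
        self.smooth = nn.ModuleList([nn.Conv2d(fpn_channels, fpn_channels, 3, padding=1)
                                     for _ in backbone_channels])
        self.classifier = nn.Conv2d(fpn_channels, n_classes, 1)  # per-pixel class scores

    def forward(self, feats):  # feats: backbone maps, finest first, coarsest last
        laterals = [lat(f) for lat, f in zip(self.lateral, feats)]
        # Top-down pathway: up-sample the coarser map and add it to the next lateral
        for i in range(len(laterals) - 1, 0, -1):
            laterals[i - 1] = laterals[i - 1] + F.interpolate(
                laterals[i], size=laterals[i - 1].shape[-2:], mode="nearest")
        pyramid = [s(l) for s, l in zip(self.smooth, laterals)]
        # Per-pixel softmax on the finest pyramid level gives segmentation probabilities
        return torch.softmax(self.classifier(pyramid[0]), dim=1)

feats = [torch.randn(1, c, s, s) for c, s in zip((64, 128, 256, 512), (64, 32, 16, 8))]
print(TinyFPNHead()(feats).shape)  # torch.Size([1, 2, 64, 64])
```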