Fig. 3: LodeSTAR analysis of images with multiple objects.

From: Single-shot self-supervised object detection in microscopy

a Example image with multiple objects to be detected by LodeSTAR (LodeSTAR is still trained on a single-object image, as described in Fig. 1). b LodeSTAR returns clustered predictions of object positions and (c) a weight map representing the likelihood of finding an object near each pixel. d An estimate of the local density of object detections is multiplied by the weight map to obtain a detection map, whose local maxima are considered object detections (orange markers in e). f–i Examples of applications of LodeSTAR to experimental data that present different challenges. In all cases, LodeSTAR is trained on the single crop shown in the respective inset and then applied to the whole time series. See also the corresponding Supplementary Movies 2–6. f LodeSTAR finds the positions of mouse hematopoietic stem cells (red markers), achieving an F1 score of 0.98 despite the dense sample (data from ref. 12). g LodeSTAR identifies human hepatocarcinoma-derived cells (red circles), achieving an F1 score of 0.97, despite the high variability between cells (data from ref. 12). h LodeSTAR detects pancreatic stem cells (red markers), achieving an F1 score of 0.95, despite the densely packed sample and the high variability between cells (data from ref. 12). i LodeSTAR detects the plankton Noctiluca scintillans. In this case, LodeSTAR detects the optically dense area of the tentacle attachment point (red circles). j Interestingly, if the data are downsampled by a factor of 3 (so that the training image is 50 px × 50 px instead of 150 px × 150 px) before training and evaluation, the model finds the cell as a whole.
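The detection step described in panels b–e (per-pixel position predictions and a weight map combined into a detection map whose local maxima are taken as detections) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the array shapes, the Gaussian density estimate, the neighborhood size, and the cutoff threshold are all illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def detect(pred_positions, weights, sigma=2.0, cutoff=0.5):
    """Sketch of the multi-object detection step (panels b-e).

    pred_positions: (H, W, 2) array of predicted (row, col) object positions,
                    one prediction per pixel (panel b).
    weights:        (H, W) map of the likelihood of finding an object near
                    each pixel (panel c).
    """
    H, W = weights.shape

    # Estimate the local density of predicted positions (panel d):
    # accumulate each pixel's prediction onto the nearest grid point,
    # then smooth with a Gaussian kernel as a simple density estimate.
    density = np.zeros((H, W))
    rows = np.clip(np.round(pred_positions[..., 0]).astype(int), 0, H - 1)
    cols = np.clip(np.round(pred_positions[..., 1]).astype(int), 0, W - 1)
    np.add.at(density, (rows.ravel(), cols.ravel()), 1.0)
    density = gaussian_filter(density, sigma)

    # Detection map: local density weighted by the likelihood map.
    detection_map = density * weights

    # Object detections are the local maxima of the detection map (panel e),
    # keeping only maxima above a fraction of the global maximum.
    is_peak = detection_map == maximum_filter(detection_map, size=5)
    is_peak &= detection_map > cutoff * detection_map.max()
    return np.argwhere(is_peak)
```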