Fig. 2: Evaluation of the positioning accuracy of object detection methods.
From: Single-shot self-supervised object detection in microscopy

The root mean squared error of the positioning accuracy for six methods: a CNN with a dense top [42], YOLOv4 [25], SoCo [18], a segmentation CNN [7], LodeSTAR* (the LodeSTAR architecture trained in a supervised manner), and LodeSTAR. Each model was trained according to the recommendations of the corresponding paper. During training, performance was evaluated on a separate validation set of 1000 images to ensure that the models neither under- nor overtrained. We evaluate the methods over training sets ranging from 1 to 1000 datapoints on five shapes: (a) a point particle, (b) a spherical particle, (c) an annulus, (d) an ellipse, and (e) a crescent moon shape. LodeSTAR outperforms all other methods at all training-set sizes; in fact, it reaches optimal performance with just one training datapoint.
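For reference, the metric reported in this figure is the root mean squared Euclidean distance between predicted and ground-truth object positions. The following is a minimal sketch of how such a metric can be computed, assuming matched arrays of predicted and true (x, y) centroids; the function name positioning_rmse and the example coordinates are illustrative, not taken from the paper.

import numpy as np

def positioning_rmse(predicted, ground_truth):
    # Root mean squared Euclidean distance (in pixels) between
    # matched predicted and ground-truth object positions.
    predicted = np.asarray(predicted, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    squared_distances = np.sum((predicted - ground_truth) ** 2, axis=-1)
    return float(np.sqrt(squared_distances.mean()))

# Example: per-object errors of (0.3, 0.4) px and (1.0, 0.0) px give an RMSE of about 0.79 px.
print(positioning_rmse([[10.3, 20.4], [31.0, 40.0]],
                       [[10.0, 20.0], [30.0, 40.0]]))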