Extended Data Fig. 3: The F1 score of SUNS was robust to moderate variation of training and post-processing parameters.

From: Segmentation of neurons from fluorescence calcium recordings beyond real time

We tested whether the accuracy of SUNS on the ABO 275 μm dataset, evaluated with ten-fold leave-one-out cross-validation, relied on intricate tuning of the algorithm's hyperparameters. The evaluated training parameters were (a) the threshold of the SNR video (th_SNR) and (b) the training batch size. The evaluated post-processing parameters were (c) the threshold of the probability map (th_prob), (d) the minimum neuron area (th_area), (e) the threshold of the COM distance (th_COM), and (f) the minimum number of consecutive frames (th_frame). The solid blue lines are the average F1 scores, and the shaded regions are the mean ± one s.d. When evaluating the post-processing parameters in (c–f), we fixed the parameter under investigation at each of the given values and optimized the F1 score over the remaining parameters. Variations in these hyperparameters produced only small variations in F1 performance. The orange lines show the F1 score (solid) ± one s.d. (dashed) obtained when all four post-processing parameters were optimized simultaneously. The similarity between the F1 scores on the blue lines and those on the orange lines suggests that optimizing three parameters (with one fixed) or all four simultaneously achieved similar performance. Moreover, the relatively consistent F1 scores along the blue lines suggest that our algorithm did not rely on intricate hyperparameter tuning.
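The robustness test in panels c–f amounts to a one-parameter-at-a-time sweep: one post-processing parameter is held at each candidate value while the F1 score is maximized over the remaining parameters. The sketch below illustrates this procedure in Python; it is not the SUNS code, and the `evaluate` callback, the parameter names, and the grid values are hypothetical placeholders for post-processing the probability maps with a given parameter set and matching the result against ground truth. The `f1_score` helper shows how F1 would be computed from matched, missed, and spurious neuron counts inside such a callback.

```python
import itertools
import numpy as np

def f1_score(n_matched, n_missed, n_spurious):
    """F1 computed from matched, missed, and spurious neuron counts."""
    precision = n_matched / max(n_matched + n_spurious, 1)
    recall = n_matched / max(n_matched + n_missed, 1)
    if precision + recall == 0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)

def sweep_fixed_parameter(evaluate, fixed_name, fixed_values, grids):
    """For each candidate value of one post-processing parameter,
    maximize the F1 score over the remaining parameters.

    `evaluate(params) -> F1` is a placeholder callback that would
    post-process the probability maps with the given parameters and
    match the segmented neurons against ground truth. Returns the
    best F1 at each fixed value, i.e. one sample of a blue curve
    in panels c-f.
    """
    free_names = [name for name in grids if name != fixed_name]
    best_scores = []
    for value in fixed_values:
        best = 0.0
        # Grid search over the free parameters with the chosen one fixed.
        for combo in itertools.product(*(grids[name] for name in free_names)):
            params = dict(zip(free_names, combo))
            params[fixed_name] = value
            best = max(best, evaluate(params))
        best_scores.append(best)
    return np.array(best_scores)

# Illustrative call (parameter names and grid values are placeholders):
# grids = {"th_prob": [...], "th_area": [...], "th_COM": [...], "th_frame": [...]}
# blue_curve = sweep_fixed_parameter(evaluate, "th_prob", grids["th_prob"], grids)
```

Under this reading of the caption, the blue curves report the constrained optimum at each fixed value, averaged across the ten leave-one-out folds, while the orange reference corresponds to the unconstrained optimum over all four post-processing parameters.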
