Fig. 7: Comprehensive analysis of UltraFedFM highlights its performance, scalability, and robustness.
From: From pretraining to privacy: federated ultrasound foundation model with self-supervised learning

a Comparison of the prediction distributions of UltraFedFM and USFM across five independent segmentation tasks. UltraFedFM’s predictions are concentrated within a high Dice similarity coefficient range (mean μ = 0.857, standard deviation σ = 0.103), whereas USFM’s predictions show greater dispersion (mean μ = 0.808, standard deviation σ = 0.174). b Prediction distributions of all methods on organ-agnostic segmentation tasks. c Prediction stability of UltraFedFM under different ratios of input distribution variability. d Ablation study of the proposed framework components. e Scaling effect of the pre-training data, evaluated with different proportions of the pre-training corpus. f Scaling effect of the pre-training model size, evaluated using different ViT architectures (ViT-Base, ViT-Large, and ViT-Huge). g Performance impact of different self-supervised learning strategies on classification and segmentation tasks. All results are scaled and normalized relative to UltraFedFM. Specific quantitative results are available in Supplementary Table 6.
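For reference, the per-prediction Dice similarity coefficient summarized in panels a and b can be computed and aggregated into the reported mean μ and standard deviation σ as in the minimal sketch below. This assumes binary segmentation masks; the function name dice_coefficient and the synthetic masks are illustrative placeholders, not the paper's evaluation code.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Illustrative only: summarize per-case Dice scores with synthetic masks,
# mirroring how a prediction distribution (mean μ, std σ) would be reported.
rng = np.random.default_rng(0)
dice_scores = []
for _ in range(100):
    target = rng.integers(0, 2, size=(64, 64))
    # Noisy copy of the target stands in for a model prediction.
    pred = np.where(rng.random((64, 64)) < 0.9, target, 1 - target)
    dice_scores.append(dice_coefficient(pred, target))

dice_scores = np.asarray(dice_scores)
print(f"mean μ = {dice_scores.mean():.3f}, std σ = {dice_scores.std():.3f}")
```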