Fig. 2: Comparison of locally and federated trained UNets and the transformer-based model (SWIN-UNETR), shown as boxplots.

From: Real world federated learning with a knowledge distilled transformer for cardiac CT imaging

Comparison of the UNet and the transformer-based model (SWIN-UNETR) for local, federated, and federated KD training for a) Hinge Points & Coronary Arteries (HPS & CAs), b) Membranous Septum (MS), and c) Calcification. Test results on the training clients are shown in blue; results on the independent test clients are shown in orange. The boxplots show the median, the 25th and 75th percentiles, and outliers. The locally trained models perform well on their own location's data but do not generalize to data from other locations, and the transformer-based architecture performs worse than the UNet. Federated training improves generalization, but the UNet still performs and generalizes better. After federated KD and subsequent finetuning, the transformer-based model is on par with the UNet at detecting the hinge points, coronary ostia, and membranous septum, while outperforming it at segmenting the calcification. While federated KD provides the SWIN-UNETR with enough additional training samples to match or exceed the UNet's predictive performance, KD does not improve the UNet's performance to a similar degree.
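A minimal sketch of how grouped boxplots of this kind could be produced, assuming per-case scores are already available for each model/training variant; the variant names, score values, and axis label below are placeholders, not values from the study:

```python
# Hypothetical data: grouped boxplots with training-client results in blue and
# independent test-client results in orange, mirroring the caption's colour scheme.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
variants = ["UNet local", "SWIN local", "UNet fed.", "SWIN fed.", "SWIN fed. KD"]

# Placeholder score distributions; real values would be per-case evaluation results.
train_scores = [rng.normal(0.8, 0.05, 50) for _ in variants]
test_scores = [rng.normal(0.7, 0.08, 50) for _ in variants]

fig, ax = plt.subplots(figsize=(8, 4))
pos = np.arange(len(variants))
b_train = ax.boxplot(train_scores, positions=pos - 0.18, widths=0.3,
                     patch_artist=True, boxprops=dict(facecolor="tab:blue"))
b_test = ax.boxplot(test_scores, positions=pos + 0.18, widths=0.3,
                    patch_artist=True, boxprops=dict(facecolor="tab:orange"))
ax.set_xticks(pos)
ax.set_xticklabels(variants, rotation=20)
ax.set_ylabel("Score")  # e.g. Dice or a distance error, depending on the task
ax.legend([b_train["boxes"][0], b_test["boxes"][0]],
          ["Training clients", "Independent test clients"])
fig.tight_layout()
plt.show()
```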
