Fig. 3: Supervised fine-tuning results. | Nature Communications

From: VOLTA: an enVironment-aware cOntrastive ceLl represenTation leArning for histopathology

After pre-training with our self-supervised framework, a fully connected head (single- or double-layer) was appended to the backbone (the model that generates the cell representations), and the resulting model was fine-tuned on the labeled data. We compared fine-tuning with a frozen versus an unfrozen backbone (a: CoNSeP; b: NuCLS). To account for the color differences between the train and test cohorts of the NuCLS dataset, we also applied Vahadane color normalization before fine-tuning, which yielded a significant boost over the unnormalized approach (c). The results demonstrate that our fine-tuned model matches the performance of the supervised baselines (HoVer-Net and NuCLS) using only 20% of the labeled data, while outperforming these baselines with the full set of labeled data (a and c). Source data are provided as a Source Data file.
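The fine-tuning setup described above (attaching a small fully connected head to a pre-trained backbone, optionally freezing the backbone's weights) can be sketched in PyTorch. This is an illustrative sketch only: the actual VOLTA backbone architecture, embedding dimension, and class counts are not specified in this caption, so the names and sizes below (`EMBED_DIM`, `NUM_CLASSES`, the toy backbone) are assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical sizes; the paper's real embedding dimension and number of
# cell classes may differ.
EMBED_DIM, NUM_CLASSES = 512, 4

# Stand-in for a pre-trained cell-representation backbone (assumption:
# any nn.Module mapping an image patch to an EMBED_DIM vector works here).
backbone = nn.Sequential(
    nn.Flatten(),
    nn.Linear(32 * 32 * 3, EMBED_DIM),
    nn.ReLU(),
)

def build_finetune_model(backbone, freeze_backbone=True, two_layer_head=False):
    """Attach a single- or double-layer fully connected head to the backbone."""
    if freeze_backbone:
        for p in backbone.parameters():
            p.requires_grad = False  # frozen-backbone variant: only the head trains
    if two_layer_head:
        head = nn.Sequential(
            nn.Linear(EMBED_DIM, EMBED_DIM),
            nn.ReLU(),
            nn.Linear(EMBED_DIM, NUM_CLASSES),
        )
    else:
        head = nn.Linear(EMBED_DIM, NUM_CLASSES)
    return nn.Sequential(backbone, head)

model = build_finetune_model(backbone, freeze_backbone=True)

# Only parameters with requires_grad=True are passed to the optimizer,
# so a frozen backbone is never updated.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-3)

x = torch.randn(8, 3, 32, 32)  # dummy batch of cell patches
logits = model(x)              # shape: (8, NUM_CLASSES)
```

The unfrozen variant is obtained by passing `freeze_backbone=False`, in which case all parameters are fine-tuned end to end.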
