Fig. 5: Examples of how to improve the predictive performance of our model using prediction uncertainty. | npj Digital Medicine

From: Deep Gaussian process with uncertainty estimation for microsatellite instability and immunotherapy response prediction from histology

In this model, the aggregated version of MSI-SEER was trained on Yonsei-1 for colorectal cancer, or on the combined TCGA-STAD and Yonsei-Classic data for gastric cancer. a All colorectal cancer test datasets, excluding the training data Yonsei-1, were combined and tested. The numbers of whole-slide images (WSIs) classified correctly are shown in green, and those classified incorrectly in orange, together with the predictive uncertainty as measured by the Bayesian confidence score (BCS). b The change in prediction performance, in terms of area under the curve (AUC), when predictions are discarded at increasing rates, i.e., #discarded WSIs/#total WSIs, in each data cohort. The red line shows the change in performance when the most uncertain predictions (those with the lowest BCSs from our model) are discarded, while the black line shows the average change in performance when predictions are discarded at random 1000 times at each rate. c All gastric cancer test datasets, excluding the training data TCGA-STAD and Yonsei-1, were combined and tested; the correctness of classification for the gastric cancer datasets is shown. d The change in prediction performance is shown for the gastric cancer test datasets.
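The reject-option analysis in panels b and d can be sketched as follows. This is a minimal illustration, not the paper's MSI-SEER model: the labels, prediction scores, and confidence values below are synthetic stand-ins, with confidence defined as distance from the decision boundary as a rough proxy for a BCS.

```python
import numpy as np

def rank_auc(y, s):
    """AUC via the Mann-Whitney U statistic (rank formulation)."""
    order = np.argsort(s)
    ranks = np.empty(len(s))
    ranks[order] = np.arange(1, len(s) + 1)
    n_pos = int(y.sum())
    n_neg = len(y) - n_pos
    return (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def auc_after_discard(y, s, conf, rate):
    """AUC after discarding the `rate` fraction of least-confident predictions."""
    keep = int(round(len(y) * (1 - rate)))
    idx = np.argsort(conf)[::-1][:keep]  # most confident first
    return rank_auc(y[idx], s[idx])

rng = np.random.default_rng(0)
n = 500
y = rng.integers(0, 2, size=n)                        # hypothetical binary labels
score = y + rng.normal(0.0, 0.7, size=n)              # hypothetical prediction scores
conf = np.abs(score - 0.5) + rng.normal(0, 0.05, n)   # stand-in confidence (BCS proxy)

auc_full = rank_auc(y, score)
for rate in (0.1, 0.2, 0.3):
    auc_u = auc_after_discard(y, score, conf, rate)   # confidence-guided (red line)
    auc_r = np.mean([auc_after_discard(y, score, rng.permutation(n), rate)
                     for _ in range(200)])            # random-discard baseline (black line)
    print(f"discard {rate:.0%}: guided {auc_u:.3f}, random {auc_r:.3f}")
```

With a well-calibrated confidence measure, the guided curve should rise above the random baseline as the discard rate grows, which is the pattern the figure reports for the BCSs.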