Fig. 5: OOD input detection via Gaussian processes.
From: Multi-layered maps of neuropil with segmentation-guided contrastive learning

a, We handled OOD inputs by computing prediction uncertainties alongside class labels, and calibrated the uncertainties to reflect the distance between each test example and the training distribution. b, To evaluate OOD detection, we trained classifiers on glial cell type labels, and then evaluated the classifiers on a 50–50 split between glial and OOD neuronal cell types. c, UMAP of locally aggregated embeddings (radius 10 μm) from the human cortical dataset, colored by ground-truth-labeled cell type. d, Confusion matrix for a ResNet-2 classifier trained on only the four glial types, with OOD neuronal examples mixed in at test time. e, As in c, but with the UMAP embeddings colored by their SNGP uncertainty. The colormap transitions from blue to red at the threshold level used to reject OOD examples in our experiment. f, Confusion matrix for the SNGP-ResNet-2, assembled from 20-fold cross-validations. Examples that exceed the uncertainty threshold are now treated as their own OOD predicted class. g, Spatial distribution of local uncertainty over an unproofread segment that suffers from reconstruction merge errors between the central OPC glial cell and several neuronal fragments. The uncertainty signal distinguishes the merged neurites (red, high uncertainty or OOD) from the glial cell (blue, low uncertainty) with a spatial resolution of approximately the embedding aggregation distance. Scale bar, 25 μm. See Fig. 3 for cell type abbreviations.
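The rejection rule behind panels e and f can be sketched as follows: predict the argmax class as usual, but route any example whose calibrated uncertainty exceeds the threshold to a separate OOD class. This is a minimal illustration with synthetic scores; the function name, the threshold value, and the toy probabilities are assumptions for exposition, not values from the paper.

```python
import numpy as np

def classify_with_ood(probs, uncertainties, threshold, ood_label=-1):
    """Assign the argmax class, but route high-uncertainty examples to an OOD class.

    probs: (N, C) predicted class probabilities for the in-distribution classes
    uncertainties: (N,) per-example predictive uncertainty (e.g. from an SNGP head)
    threshold: uncertainty level above which an example is treated as OOD
    """
    preds = probs.argmax(axis=1)
    # Examples above the threshold are reassigned to the OOD class,
    # which then appears as its own row/column in a confusion matrix.
    return np.where(uncertainties > threshold, ood_label, preds)

# Synthetic example: one confident in-distribution prediction, one ambiguous one.
probs = np.array([
    [0.90, 0.05, 0.03, 0.02],  # confidently class 0
    [0.30, 0.30, 0.20, 0.20],  # ambiguous, high uncertainty
])
uncertainties = np.array([0.1, 0.8])
print(classify_with_ood(probs, uncertainties, threshold=0.5))  # [ 0 -1]
```

Treating rejected examples as an explicit class (rather than silently dropping them) is what allows the OOD row and column in the panel f confusion matrix.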