Fig. 7: Population decoding of place using a linear SVM classifier. | Nature Communications


From: Primacy of vision shapes behavioral strategies and neural substrates of spatial navigation in marmoset hippocampus


a, d Top: 3D diagram of the binned place locations used to decode the subject’s position. Bottom: blue and pink lines correspond to Naka–Rushton function fits to the mean decoding accuracy (y-axis) as a function of ensemble size (number of neurons, x-axis); shaded areas correspond to 95% confidence intervals. Blue solid lines correspond to the best ensemble constructed from a pool of all recorded putative pyramidal neurons. Pink solid lines correspond to the best ensemble constructed from a pool of cells that were not significantly selective (as per the GAM encoding analysis). The R² goodness-of-fit value is reported. Cyan lines correspond to the mean decoding accuracy of randomized combinations of neurons (100 iterations); shaded areas correspond to 95% confidence intervals. Gray dashed lines correspond to chance decoding accuracy (1/4 = 0.25). b, e Proportion of best-encoded variables (as per the GAM encoding analyses) for the n = 20 neurons forming the best ensemble pooled from all single units (a, d; blue line). c, f Confusion matrix derived from the classification accuracy of the best ensemble (a, d; blue line).
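The decoding-accuracy curves in panels a and d saturate with ensemble size, which is why a Naka–Rushton function is fit and an R² value reported. A minimal sketch of that fitting step, using the exponent-1 form of the Naka–Rushton curve, synthetic accuracy data, and a simple grid search in place of the authors' actual fitting procedure (all parameter values here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def naka_rushton(n, r_max, n50, baseline):
    """Naka-Rushton saturation curve (exponent 1): decoding accuracy
    rises from the chance baseline and saturates toward r_max;
    n50 is the ensemble size at half-saturation."""
    return baseline + (r_max - baseline) * n / (n + n50)

# Hypothetical mean decoding accuracies for ensembles of 1..20 neurons,
# generated from the curve itself plus small noise (illustrative only)
sizes = np.arange(1, 21)
rng = np.random.default_rng(0)
acc = naka_rushton(sizes, r_max=0.9, n50=4.0, baseline=0.25) \
      + rng.normal(0.0, 0.01, sizes.size)

# Coarse least-squares grid search over (r_max, n50); the chance
# baseline is fixed at 1/4 = 0.25, as in a four-location decoder
best = None
for r_max in np.linspace(0.5, 1.0, 26):
    for n50 in np.linspace(1.0, 10.0, 46):
        pred = naka_rushton(sizes, r_max, n50, baseline=0.25)
        sse = np.sum((acc - pred) ** 2)
        if best is None or sse < best[0]:
            best = (sse, r_max, n50)

sse, r_max_fit, n50_fit = best
# R^2 goodness of fit, the statistic reported in the figure legend
ss_tot = np.sum((acc - acc.mean()) ** 2)
r2 = 1.0 - sse / ss_tot
```

With noise this small, the recovered parameters sit near the generating values and R² is close to 1; a nonlinear least-squares routine (e.g. `scipy.optimize.curve_fit`) would be the idiomatic replacement for the grid search in practice.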
