Fig. 2 | Scientific Reports

From: Orthogonal neural representations support perceptual judgments of natural stimuli

Object position decoding from V4 population responses is consistent across background variations. (a) Object position can be linearly decoded for each background stimulus (example session shown). Each panel represents a unique configuration of background rotation and depth, with rows representing variations in rotation and columns representing variations in depth. Each gray point shows the decoded position for a single image presentation in this session. These points depict the actual object position (x-axis, in visual degrees relative to the center of the image) and the decoded position (y-axis), using a separate, cross-validated linear decoder for each unique background. The open circles represent the trial-averaged predicted position (vertical extent indicates the standard deviation). The number in the top left is the correlation between the actual and decoded positions, and the yellow dashed line is a linear fit (against the null hypothesis that the correlation is 0, i.e., a 'constant model', the p-values of tests to reject this hypothesis range from 2.5 × 10⁻¹⁵ to 1.4 × 10⁻⁶ for this session). The dark-gray-to-yellow gradient of the open circles is a redundant cue that also conveys stimulus position variation. The gray dashed line represents the identity. (b) Position decoding is largely consistent across background variations. This plot is in the same format as those in (a). Here, a condition-general decoder that ignores variations in the background and incorporates all stimulus presentations is used (r = 0.73; vs. constant model, p < 0.001). We also computed the general decoder using the minimum number of presentations across all 25 specific decoders in (a) (60 trials) for 100 folds and found similar results (r = 0.72; vs. constant model, p < 0.001). Compare with Supp. Fig. 2a and b for background rotation and depth decoding.
(c) Distribution of specific decoder accuracies (correlations) across all sessions for each monkey (each session contributed 25 values to the histogram). Blue and red arrows represent the median accuracy (0.662 for monkey 1, 0.703 for monkey 2). The box plots above the histograms summarize general decoder accuracy across sessions for each monkey. The central line in each box plot indicates the median (0.735 for monkey 1, 0.724 for monkey 2), box edges indicate the 25th and 75th percentiles, whiskers indicate minimum and maximum values, and + symbols indicate outliers. We found similar results for the trial-count-matched general decoders (median 0.732 for monkey 1, 0.722 for monkey 2). Compare with Supp. Fig. 2c and d for background rotation and depth decoding. (d) Error in decoding object position (across background variations) for each trial compared with the error in decoding background rotation. The correlation between the two types of errors is very small (r = −0.083), although it is statistically significant (p = 0.001). See Supp. Fig. 2e for the comparison with error in trial-wise background depth decoding.
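The cross-validated linear decoding described above can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' analysis code: the neuron count, trial count, ridge regularization, and 10-fold cross-validation are all assumptions for the sake of a runnable example, and the simulated tuning weights are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Synthetic stand-in for one session: n_trials stimulus presentations of an
# object at varying horizontal positions, recorded from n_neurons units.
n_trials, n_neurons = 300, 50
position = rng.uniform(-3, 3, n_trials)        # degrees of visual angle
weights = rng.normal(0, 1, n_neurons)          # hypothetical linear tuning weights
responses = np.outer(position, weights) + rng.normal(0, 3, (n_trials, n_neurons))

# Cross-validated linear decoder: each trial's prediction comes from a model
# fit on the remaining folds, so decoded positions are out-of-sample.
decoded = cross_val_predict(Ridge(alpha=1.0), responses, position, cv=10)

# Decoder accuracy reported as the correlation between actual and decoded
# positions, as in the caption; a constant model would give r near 0.
r = np.corrcoef(position, decoded)[0, 1]
print(f"decoding accuracy r = {r:.3f}")
```

A condition-general decoder corresponds to pooling trials from all backgrounds into one fit, whereas the 25 specific decoders in (a) correspond to running this per background condition.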
