Fig. 5
From: Orthogonal neural representations support perceptual judgments of natural stimuli

Orthogonal representations for variations in up to 10 object and background features. We generated a large image set in which the color, luminance, position, rotation, and depth of both the background and the object each took one of two values, yielding 2^10 = 1,024 images. We collected V4 population responses to these images as in Supp. Fig. 1c. (a) Example images illustrating object and background parameter variation. (b) If two features are encoded orthogonally (independently) in neural population space, then a decoder trained on one feature should not support decoding of the other. We trained linear decoders of V4 responses for each object and background feature (x-axis) and tested their ability to decode each of the other features. The diagonal entries report, for each feature, the correlation between the decoded and actual feature parameter values for a decoder trained on that feature; correlations were obtained through cross-validation. Decoding performance was above chance (median correlation of 0.61) for all features (p < 0.0001; t-test across folds). The off-diagonal entries show the performance of a decoder trained on one feature (x-axis) at decoding another (y-axis). This cross-decoding performance was indistinguishable from chance (median correlation of 0.008) except for the color of the object and background (r = 0.36).
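
The cross-decoding logic of panel (b) can be sketched as follows, assuming a response matrix of shape (n_images, n_neurons) and a binary feature matrix of shape (n_images, 10) holding the two-level object and background parameters. The ridge-regularized linear decoders, variable names, and 10-fold split are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a cross-decoding analysis: train a linear decoder on one
# feature, then correlate its output with every feature's true values.
# Assumed inputs (not from the paper): `responses` (n_images, n_neurons) and
# `features` (n_images, 10) with two-level parameter values per column.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def cross_decoding_matrix(responses, features, n_folds=10, alpha=1.0):
    """Entry [i, j]: cross-validated correlation between feature j and the
    output of a linear decoder trained to read out feature i."""
    n_features = features.shape[1]
    corrs = np.zeros((n_features, n_features, n_folds))
    kf = KFold(n_splits=n_folds, shuffle=True, random_state=0)

    for fold, (train, test) in enumerate(kf.split(responses)):
        for i in range(n_features):
            # Train a decoder for feature i on the training images only.
            decoder = Ridge(alpha=alpha).fit(responses[train], features[train, i])
            decoded = decoder.predict(responses[test])
            for j in range(n_features):
                # j == i gives the diagonal of panel (b); j != i gives the
                # off-diagonal (cross-decoding) entries.
                corrs[i, j, fold] = np.corrcoef(decoded, features[test, j])[0, 1]

    # Average over folds; orthogonal coding predicts near-zero off-diagonal values.
    return corrs.mean(axis=2)
```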