Extended Data Fig. 5: Results from individual monkeys and various control conditions. | Nature Neuroscience


From: Behavioral read-out from population value signals in primate orbitofrontal cortex


(a-h) Choice decoding and neuron dropping analyses for individual monkeys. Same format and conventions as Fig. 3. In panels (c) and (g), p-values indicate significant differences (uncorrected). For boxplots, thick lines and boxes show the median and IQR; whiskers extend up to three times the IQR. (i) The same data used to generate Fig. 3a,b, except time-locked to the decision RT, defined as the lift of the center lever. Note that when the data are aligned to the RT, the first and second target viewing times (black and blue boxplots, respectively) are widely spread in time and overlap substantially. This overlap creates an artifact in which the area under the ROC curve for the ‘1st value’ decoder briefly drops below 0.5 (whereas in Fig. 3a,b it is always above 0.5). Within this brief time window, the encoding subspaces for ‘1st value’ and ‘2nd value’ are aligned such that neural activity related to ‘2nd value’ projects onto the ‘1st value’ subspace. In other words, the ‘1st value’ decoder becomes contaminated by spiking activity related to the second offer value, resulting in predictions driven by the value of the second offer (that is, AUC values below 0.5). This subspace alignment and the resulting contamination arise only because of the large spread between the first and second target viewing times when the data are RT-locked. When the data are locked to target viewing, the decoder weights for ‘1st value’ and ‘2nd value’ are uncorrelated (Fig. 4a), indicating orthogonal decoding subspaces. (j) Choice decoding and neuron dropping analyses using non-condition-normalized data to calculate the area under the ROC curve (ref. 37). In this method, trials were grouped into 66 unique conditions, defined by the combination of the first and second target identities. The non-condition-normalized data (\({\mathbf{\hat{Y}}}\) from equation (3)) were then used to calculate a separate AUC within each condition.
Finally, within each session the AUCs were averaged across conditions to compute the session-wise AUC. An AUC could not be calculated for conditions lacking at least one trial of each choice outcome (first or second offer chosen); because there were 66 unique conditions, there were often too few trials per condition, and as a result ~65% of test trials were discarded. For this reason, the main analysis uses condition-normalized decoder outputs, so that data from all eligible test trials can contribute to the AUC calculation. All conventions are the same as in Fig. 3a,b. (k) Choice decoding as in Fig. 3a,b, performed using only those test trials in which the offers were equal in value (mean 43.9 trials in 16 sessions for monkey K and 51.7 trials in 16 sessions for monkey C). Filled significance indicators: corrected P < 0.05 compared to 0.5; open indicators: uncorrected P < 0.05 compared to 0.5. Otherwise, conventions are the same as in Fig. 3a,b.
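The condition-wise AUC procedure described for panel (j) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' code: the function names, the label coding (1 = first offer chosen), and the toy inputs are all hypothetical; the AUC is computed via the rank-based Mann-Whitney formulation.

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic.

    labels: 1 = first offer chosen, 0 = second offer chosen (assumed coding).
    Returns NaN when a condition lacks at least one trial of each outcome,
    mirroring the discard rule described in the caption.
    """
    pos, neg = scores[labels == 1], scores[labels == 0]
    if len(pos) == 0 or len(neg) == 0:
        return np.nan  # cannot form an ROC with only one outcome present
    diff = pos[:, None] - neg[None, :]
    # Fraction of (pos, neg) pairs where pos > neg; ties count as 0.5.
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def sessionwise_auc(decoder_output, choices, condition_ids):
    """Compute one AUC per condition, then average across valid conditions."""
    aucs = []
    for c in np.unique(condition_ids):
        mask = condition_ids == c
        a = auc(decoder_output[mask], choices[mask])
        if not np.isnan(a):          # skip conditions without both outcomes
            aucs.append(a)
    return float(np.mean(aucs))
```

With 66 conditions and few test trials per condition, many conditions return NaN and their trials are dropped, which is why roughly 65% of test trials were discarded under this method.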
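The contamination artifact described for panel (i) can be illustrated with a toy two-dimensional population space. This is a hedged sketch, not the recorded data: the axes, decoder weights, and choice rule are invented for illustration. The point it demonstrates is that when the ‘1st value’ decoder weights are partially aligned with the ‘2nd value’ encoding axis, activity driven by the second offer projects onto the decoder and pushes its AUC below 0.5, whereas an orthogonal decoder stays at chance.

```python
import numpy as np

def auc(scores, labels):
    # Mann-Whitney AUC: probability a 'label 1' score exceeds a 'label 0' score.
    pos, neg = scores[labels == 1], scores[labels == 0]
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

rng = np.random.default_rng(0)

# Hypothetical 2-D population space: axis 0 carries '1st value' signals,
# axis 1 carries '2nd value' signals. In the artifact window, spiking
# reflects only the second offer's value.
second_value = rng.uniform(0, 1, 1000)
activity = np.column_stack([np.zeros_like(second_value), second_value])
first_chosen = (second_value < 0.5).astype(int)  # high 2nd value -> 2nd chosen

w_orthogonal = np.array([1.0, 0.0])  # decoder orthogonal to the '2nd value' axis
w_aligned = np.array([1.0, 0.8])     # decoder partially aligned with that axis

auc_orth = auc(activity @ w_orthogonal, first_chosen)   # chance: 0.5
auc_contam = auc(activity @ w_aligned, first_chosen)    # below 0.5
```

The orthogonal decoder ignores the second-offer activity (AUC = 0.5), matching the target-viewing-locked regime in which the ‘1st value’ and ‘2nd value’ decoder weights are uncorrelated (Fig. 4a); the aligned decoder's predictions are driven by the second offer's value, yielding an AUC below 0.5 as in the RT-locked data.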
