Fig. 5: Estimation of neural coding similarity across conditions.
From: Interacting with volatile environments stabilizes hidden-state inference and its brain signatures

a Graphical description of the coding similarity estimation procedure. Left: a population of linear coding units \(z_{i,t}\) represents an input scalar variable \(x_t\) through a fraction of “selective” units (\(z_{i,t} = \beta _i\,x_t + \varepsilon _{i,t}\) with \(\beta _i \ne 0\)) embedded in background noise \(\varepsilon _{i,t}\) of SD \(\sigma\). A linear decoder is applied to population activity to compute an estimate \(\hat x_t\) of the input variable. Right: the same population of coding units represents the same input variable \(x\) through partially overlapping sets of selective units \(z\) in the cue-based (left) and outcome-based (right) conditions. Computing coding precision within each condition (1–2) and across conditions (3–4, marked “gen.”, by using the coding weights \(w\) estimated in one condition to compute neural predictions in the other condition) quantifies the degree of similarity (overlap) between the selective units in the two conditions. b Estimated coding similarities for stimulus orientation (left), stimulus change (middle), and stimulus evidence (right). Bars and error bars indicate jackknifed means ± SEM (n = 24 participants). Dots show predicted values obtained by simulating the population of coding units with best-fitting estimates of similarity and background noise. Bar fillings indicate the condition in which neural predictions are computed; bar outlines indicate the condition in which coding weights are estimated. This procedure indicates near-perfect coding similarity between the cue-based and outcome-based conditions for stimulus orientation (left, jackknifed mean: 100%), stimulus change (middle, 93%), and stimulus evidence (right, 100%). Source data are provided as a Source Data file.
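The logic of panel a can be illustrated with a minimal simulation sketch. All sizes here (population of 100 units, 40 selective units per condition, 20 shared, noise SD, trial count) are hypothetical toy values, not the study's parameters, and the least-squares decoder stands in for whatever decoding scheme the authors used:

```python
import numpy as np

rng = np.random.default_rng(1)
n_units, n_trials, noise_sd = 100, 500, 2.0

# Hypothetical ground truth: 40 selective units per condition, 20 shared.
beta = rng.normal(size=n_units)      # coding weights beta_i of selective units
sel_a = np.arange(0, 40)             # units selective in condition A (cue-based)
sel_b = np.arange(20, 60)            # units selective in condition B (outcome-based)

def simulate(sel):
    """Simulate population activity z_{i,t} = beta_i * x_t + noise for one condition."""
    x = rng.normal(size=n_trials)
    b = np.zeros(n_units)
    b[sel] = beta[sel]               # shared units keep identical coding weights
    z = np.outer(x, b) + noise_sd * rng.normal(size=(n_trials, n_units))
    return x, z

x_a, z_a = simulate(sel_a)
x_b, z_b = simulate(sel_b)

# Linear decoder: least-squares weights w such that z @ w approximates x.
w_a = np.linalg.lstsq(z_a, x_a, rcond=None)[0]
w_b = np.linalg.lstsq(z_b, x_b, rcond=None)[0]

def precision(z, x, w):
    """Coding precision as the correlation between decoded and true variable."""
    return np.corrcoef(z @ w, x)[0, 1]

within_a = precision(z_a, x_a, w_a)  # step 1: within-condition precision (A)
within_b = precision(z_b, x_b, w_b)  # step 2: within-condition precision (B)
gen_ab = precision(z_b, x_b, w_a)    # step 3: generalization, weights A -> data B
gen_ba = precision(z_a, x_a, w_b)    # step 4: generalization, weights B -> data A
```

With full overlap of selective units, generalization precision approaches within-condition precision; with partial overlap it drops, which is the intuition behind reading the similarity estimate off the within- versus across-condition comparison. In practice within-condition precision would be cross-validated rather than computed on the fitting data as done here.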