Extended Data Fig. 5: Entropy coding transforms response levels and reduces average firing rates. | Nature Neuroscience

From: Efficient and adaptive sensory codes

a-c, Entropy coding reassigns response levels to spike counts based on the predicted probability that a response level will be used. a, The encoding nonlinearity partitions stimuli drawn from the actual stimulus distribution (black) or predicted stimulus distribution (red) into discrete response levels. This partitioning determines the predicted (b, left) and actual (c, left) probability with which response levels will be used. b, Entropy coding reassigns spike counts in order of decreasing predicted probability (left); response levels that have higher predicted probability (darker red) are assigned fewer spikes (middle). This yields a recoded histogram that is weighted toward lower spike counts (right). The reassignment could be approximated by a quadratic nonlinearity, or by a thresholding exponential nonlinearity. c, The recoding scheme, which is based on predicted probability (b), determines how stimuli sampled from the actual stimulus distribution are transformed into spike counts. d, Before recoding (left), firing rates do not change in response to a change between low (L) and high (H) stimulus variance, regardless of the value of αI. After recoding (right), all codes show a transient response to both increases and decreases in variance, and time-averaged firing rates are lower (inset). e, Before recoding (left), firing rates increase following a change from low (L) to high (H) stimulus mean, and decrease following a change from high to low mean. After recoding (right), all codes show a symmetric response to increases and decreases in mean, and time-averaged firing rates are lower (inset).
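The reassignment step in panel b can be illustrated with a minimal sketch: response levels are ranked by their predicted probability of use, and the most probable level is assigned the smallest spike count. The function and variable names below are illustrative, not from the paper, and the example probabilities are hypothetical.

```python
def entropy_recode(predicted_prob):
    """Map each response-level index to a new spike count: the level with
    the highest predicted probability gets 0 spikes, the next gets 1, etc."""
    # Sort level indices by decreasing predicted probability.
    order = sorted(range(len(predicted_prob)),
                   key=lambda i: predicted_prob[i], reverse=True)
    # The k-th most probable level is assigned k spikes.
    return {level: spikes for spikes, level in enumerate(order)}

# Hypothetical example: four response levels (original spike counts 0..3)
# with predicted usage probabilities.
p_pred = [0.1, 0.5, 0.3, 0.1]
mapping = entropy_recode(p_pred)  # most probable level (index 1) -> 0 spikes

# Time-averaged firing rate before vs. after recoding, here taking the
# actual usage probabilities equal to the predicted ones for simplicity.
p_actual = p_pred
rate_before = sum(p * level for level, p in enumerate(p_actual))
rate_after = sum(p * mapping[level] for level, p in enumerate(p_actual))
```

Because frequently used levels receive the fewest spikes, the recoded average spike count (`rate_after`) is lower than the original (`rate_before`) whenever the usage distribution is non-uniform, matching the lower time-averaged rates shown in the insets of panels d and e.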
