
Fig. 1: Model mimicry: many models produce the same pattern of results.

From: Model mimicry limits conclusions about neural tuning and can mistakenly imply unlikely priors


In generative forward modeling, EEG data are simulated from models that use different sets of orientation tuning functions (top row). Decoding results (mean-centered decoding accuracy, mean-centered precision, and bias) as a function of orientation are shown (bottom three rows) for simulations using different underlying example models. Blue error areas are 95% confidence intervals of the mean of the simulated instances (n = 36) of each model for each decoding metric. A Preferred tuning model: Tuning functions are unevenly spaced along the orientation space, with more clustering at vertical orientations and even more at horizontal orientations. This is the best-fitting model from Harrison and colleagues1. B Width model: Tuning curve widths are uneven, with narrowest tuning for obliques, wider tuning for vertical, and widest tuning for horizontal. C Gain model: Tuning curve gain is uneven across the orientation space, with more gain at cardinal orientations, highest for horizontal. D Signal-to-noise (SNR) model: Tuning curves are uniform, but signal strength is orientation-specific. Source data are provided as a Source Data file.
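A minimal sketch of the generative forward modeling idea described above, written in Python with NumPy, is shown below. It is not the article's simulation or decoding pipeline; the channel count, tuning widths, gain values, and the cardinal_weight helper are illustrative assumptions used only to show how the four tuning schemes (uneven preferred orientations, uneven widths, uneven gains, and uniform tuning with orientation-specific signal strength) could each generate noisy channel responses.

```python
# Illustrative sketch of generative forward modeling with four hypothetical tuning schemes.
# All parameter values and helper functions are assumptions, not those used in the article.
import numpy as np

N_CHANNELS = 16           # number of simulated orientation-tuned channels (assumed)
ORIENTS = np.arange(180)  # orientation space in degrees (0-179)

def tuning_curve(pref, width_deg, gain):
    """Gaussian-like tuning curve on the 180-degree circular orientation space."""
    d = np.angle(np.exp(1j * 2 * np.deg2rad(ORIENTS - pref))) / 2  # wrapped difference (radians)
    return gain * np.exp(-np.rad2deg(d) ** 2 / (2 * width_deg ** 2))

def cardinal_weight(theta, horiz_extra=0.5):
    """Larger values near cardinals (0/90 deg), largest near horizontal (0/180 deg)."""
    return np.abs(np.cos(2 * np.deg2rad(theta))) + horiz_extra * np.abs(np.cos(np.deg2rad(theta)))

uniform_prefs = np.linspace(0, 180, N_CHANNELS, endpoint=False)

# A: preferred-tuning model -- channel preferences cluster near cardinals, densest near horizontal
density = 1 + 2 * cardinal_weight(ORIENTS)
cdf = np.cumsum(density) / density.sum()
pref_A = np.interp(np.linspace(0, 1, N_CHANNELS, endpoint=False), cdf, ORIENTS)

models = {
    "preferred": [tuning_curve(p, 20, 1.0) for p in pref_A],                              # A
    "width":     [tuning_curve(p, 15 + 15 * cardinal_weight(p), 1.0) for p in uniform_prefs],  # B
    "gain":      [tuning_curve(p, 20, 1.0 + cardinal_weight(p)) for p in uniform_prefs],  # C
    "snr":       [tuning_curve(p, 20, 1.0) for p in uniform_prefs],                       # D: uniform tuning
}

def simulate_trial(model_name, stim_deg, noise_sd=0.5, rng=None):
    """Simulate one trial of noisy channel responses (a stand-in for EEG features)."""
    rng = rng or np.random.default_rng(0)
    basis = np.array(models[model_name])          # shape: (channels, orientations)
    signal = basis[:, int(stim_deg) % 180]
    if model_name == "snr":                       # D: orientation-specific signal strength
        signal = signal * (1 + cardinal_weight(stim_deg))
    return signal + rng.normal(0, noise_sd, size=signal.shape)

trial = simulate_trial("gain", stim_deg=45)
print(trial.shape)  # (16,) -- one noisy channel-response vector for a 45-degree stimulus
```

In a full simulation of this kind, many such trials would be generated per orientation and passed through the same decoding analysis applied to real EEG data, yielding the accuracy, precision, and bias curves plotted in the figure.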
