Fig. 2: A non-linear, convolutional Neural Network model predicts the shape of the LSTAs. | Nature Communications

From: Context-dependent selectivity to natural images in the retina


a Schematic of the different architectures used to predict the responses of multiple retinal ganglion cells to a flashed image (top). The LN model (middle) consists of a linear filter followed by a nonlinear function. The CNN model (bottom) consists of a convolutional layer (inferred kernels) and a dense layer (readout weights factorized into spatial masks and feature weights). b Average performance of the two models at predicting the average responses to repeated, unperturbed natural images (see Methods and Supplementary Fig. 4 for a scatter plot of these data across all modelled cells). The data reported here are from N = 12 (mouse) and N = 7 (axolotl) cells that were both modelled and showed polarity inversion. In both species the CNN significantly outperforms the LN model (\(p = 1 \times 10^{-3}\) for mouse and \(p = 1 \times 10^{-2}\) for axolotl, two-sided Wilcoxon signed-rank test). c, d Two example LSTAs (second column), for mouse (c) and axolotl (d), measured for two example cells (same as Fig. 1) and different reference images (first column), along with the predictions of the two models (third and fourth columns for the LN and CNN models, respectively). e Average performance of the two models at predicting the LSTAs of inverting cells (see Methods). The data reported here are from N = 57 (mouse) and N = 26 (axolotl) measured LSTAs. Again, in both species the CNN significantly outperforms the LN model (\(p = 1 \times 10^{-9}\) for mouse and \(p = 5 \times 10^{-2}\) for axolotl, two-sided Wilcoxon signed-rank test). Bar plots are presented as mean and SEM. Source data are provided as a Source Data file. Credit for the natural images shown here goes to Hans Van Hateren: http://bethgelab.org/datasets/vanhateren/.
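The two architectures in panel a can be summarized in a minimal NumPy sketch. This is an illustrative reconstruction, not the authors' implementation: the filter shapes, the choice of softplus/ReLU nonlinearities, and all function and variable names are assumptions for the sake of the example. It shows the key structural difference: the LN model applies a single linear filter per cell, while the CNN pools convolutional feature maps through a readout factorized into a spatial mask and per-feature weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def softplus(x):
    # Smooth rectifying output nonlinearity (an illustrative choice).
    return np.log1p(np.exp(x))

# --- LN model: one linear filter per cell, then a static nonlinearity ---
def ln_response(image, filt, bias=0.0):
    """Predicted rate = nonlinearity(filter . image + bias)."""
    return softplus(np.sum(filt * image) + bias)

# --- CNN model: convolutional layer + factorized dense readout ---
def conv2d_valid(image, kernel):
    """Naive 'valid' 2-D cross-correlation (no padding)."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def cnn_response(image, kernels, mask, feature_w, bias=0.0):
    """Readout factorized into one spatial mask (shared across feature
    maps) and one weight per feature, rather than a full dense layer."""
    maps = [np.maximum(conv2d_valid(image, k), 0) for k in kernels]
    pooled = np.array([np.sum(mask * m) for m in maps])  # spatial mask
    return softplus(feature_w @ pooled + bias)           # feature weights

# Toy stimulus and randomly initialized parameters (hypothetical sizes).
image = rng.standard_normal((16, 16))
filt = rng.standard_normal((16, 16)) * 0.05
kernels = rng.standard_normal((4, 5, 5)) * 0.1   # 4 convolutional kernels
mask = rng.random((12, 12))                      # matches 'valid' map size
feature_w = rng.standard_normal(4) * 0.5

print(ln_response(image, filt))
print(cnn_response(image, kernels, mask, feature_w))
```

The factorization in `cnn_response` is what keeps the readout parameter count low: each cell learns one mask over space and one weight per feature map, instead of a full weight for every pixel of every map.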
