Fig. 3: Incorporating photoreceptor adaptation improves CNN performance in predicting RGC responses to naturalistic movies.

From: Biophysical neural adaptation mechanisms enable artificial neural networks to capture dynamic retinal computation

a Comparison between the conventional CNN and the photoreceptor–CNN model, with the photoreceptor layer parameters trained together with the downstream CNN. The y-axis shows the performance of the conventional CNN (left) and the photoreceptor–CNN model (right) as FEV values for each RGC (circles). Light gray circles denote ON-type RGCs (N = 27), and dark gray circles denote OFF-type RGCs (N = 30). Connecting lines link the FEV values for each RGC across models. Median FEV values across all RGCs (N = 57) are indicated by red lines and stated as FEV ± 95% c.i. in red text at the top. P-values were calculated with a paired two-sided Wilcoxon signed-rank test on the FEV distributions from the CNN and the photoreceptor–CNN model. An asterisk indicates a statistically significant difference (p < 0.01) between the performance of the two models. b Same comparison as in (a), but between the conventional CNN (same as (a), left) and a photoreceptor–CNN in which the biophysical photoreceptor model is replaced by a linear empirical photoreceptor model. c Same comparison as in (a), but between photoreceptor–CNN models with the biophysical photoreceptor layer parameters fixed to experimental fits to primate rods (left; Supplementary Table 2) and with the photoreceptor layer parameters learned along with the downstream CNN (right; same as (a), right). Source data are provided as a Source Data file.
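The statistical comparison described in the caption can be reproduced along the following lines. This is a minimal sketch, not the authors' code: it assumes per-cell FEV values for the two models are available as NumPy arrays, the names fev_cnn, fev_pr_cnn, and median_ci are hypothetical, and a bootstrap is used here for the 95% confidence interval on the median (the caption does not specify the paper's exact CI procedure).

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)

# Hypothetical per-RGC FEV values (N = 57 cells), one value per model.
fev_cnn = rng.uniform(0.2, 0.8, size=57)                 # conventional CNN
fev_pr_cnn = fev_cnn + rng.normal(0.05, 0.05, size=57)   # photoreceptor-CNN

# Paired two-sided Wilcoxon signed-rank test across the same cells.
stat, p_value = wilcoxon(fev_cnn, fev_pr_cnn, alternative="two-sided")

def median_ci(values, n_boot=10_000, rng=rng):
    """Median with a bootstrap 95% confidence interval (one common choice)."""
    boot = rng.choice(values, size=(n_boot, values.size), replace=True)
    medians = np.median(boot, axis=1)
    lo, hi = np.percentile(medians, [2.5, 97.5])
    return np.median(values), (lo, hi)

for name, fev in [("CNN", fev_cnn), ("photoreceptor-CNN", fev_pr_cnn)]:
    med, (lo, hi) = median_ci(fev)
    print(f"{name}: median FEV = {med:.2f} (95% c.i. {lo:.2f}-{hi:.2f})")

print(f"Wilcoxon signed-rank p = {p_value:.3g} "
      f"({'significant' if p_value < 0.01 else 'not significant'} at p < 0.01)")
```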