Fig. 5: The effect of input statistics on network sensitivity can be understood with linear network models.

From: Efficient neural codes naturally emerge through gradient descent learning

Despite their simplicity, these models show human-like learning phenomena. a We trained linear networks to reconstruct black-and-white patches of natural images. b The statistics of natural images can be analyzed with Principal Component Analysis (PCA); the variance of successive PCs falls off with a characteristic power-law decay. c When learning with gradient descent, the weight matrix W learns each PC separately, in order of decreasing variance. The sharpness of the sigmoidal learning curve is controlled by the network depth (SI Fig. 2). d Human perceptual learning curves are also sigmoidal, and increasing task difficulty delays learning dynamics. Data replotted from ref. 49; subjects were trained to detect the orientation of a line, and task difficulty was controlled by a masking stimulus. e Paradigm for measuring the sensitivity of W to spatial frequency. f Every 50 learning steps we plotted the inverse square root of the sensitivity to spatial frequency, which serves as a proxy for detection thresholds. Note the linear increase above an elbow frequency at each step. g Human data on spatial frequency thresholds, replotted from ref. 44. h An artificial spatial ‘acuity’ grows nearly linearly with training; ‘acuity’ is defined as the maximum spatial frequency for which the artificial threshold is below 0.1. i In infants and children, spatial acuity (the highest spatial frequency observable for high-contrast gratings) also increases linearly with age. Replotted from ref. 12, with error bars representing ±SEM over n = 4–10 subjects (the number varies per point and was not reported exactly).
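
The sequential, sigmoidal learning of principal components in panels a–c follows from the gradient-descent dynamics of deep linear networks. Below is a minimal sketch, not the authors' code: a depth-2 linear network trained to reconstruct synthetic Gaussian data whose covariance has a power-law eigenspectrum, standing in for natural image patches. The dimensionality, decay exponent, learning rate, and step count are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): a depth-2 linear network
# x_hat = W2 @ W1 @ x trained by gradient descent to reconstruct its input.
# Synthetic Gaussian data with a power-law eigenspectrum stands in for natural
# image patches; d, the exponent, lr, and n_steps are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, n_steps, lr = 32, 4000, 0.05

# "Natural image" statistics: PC variances decay as a power law (panel b).
variances = np.arange(1, d + 1) ** -2.0
pcs = np.linalg.qr(rng.standard_normal((d, d)))[0]  # random orthonormal PCs
cov = pcs @ np.diag(variances) @ pcs.T

W1 = 1e-3 * rng.standard_normal((d, d))  # small random initialization
W2 = 1e-3 * rng.standard_normal((d, d))

mode_strength = np.zeros((n_steps, d))
for t in range(n_steps):
    # Expected reconstruction loss E||x - W2 W1 x||^2 gives, for the
    # end-to-end map W = W2 W1, the gradient dL/dW = -2 (I - W) cov.
    grad_W = -2.0 * (np.eye(d) - W2 @ W1) @ cov
    g2, g1 = grad_W @ W1.T, W2.T @ grad_W  # chain rule through the product
    W2 -= lr * g2
    W1 -= lr * g1
    # Project the end-to-end map onto each PC: each column rises sigmoidally
    # from ~0 to ~1, and high-variance PCs are learned first (panel c).
    mode_strength[t] = np.diag(pcs.T @ (W2 @ W1) @ pcs)

# Leading (high-variance) PCs are fully learned; trailing ones still rising.
print(np.round(mode_strength[-1, :12], 2))
```

Because the expected gradient depends on the inputs only through their covariance, each PC grows independently at a rate set by its variance, which is what produces the staggered sigmoidal curves in panel c.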
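Panels e, f, and h rest on a simple probe of the learned weights. As a hedged illustration of that kind of measurement (the paper's exact stimulus construction and normalization may differ), one can drive W with unit-norm sinusoidal gratings, take the output energy as the sensitivity, and report its inverse square root as an artificial detection threshold; ‘acuity’ is then the highest frequency whose threshold stays below the 0.1 cutoff named in the caption.

```python
# Hedged sketch of a spatial-frequency sensitivity probe (panels e, f, h);
# the grating normalization and the 1-D simplification are assumptions.
import numpy as np

def threshold_proxy(W, freq):
    """Inverse square root of W's sensitivity to a grating of `freq` cycles."""
    n = W.shape[1]
    grating = np.cos(2 * np.pi * freq * np.arange(n) / n)
    grating /= np.linalg.norm(grating)        # unit-contrast probe (assumption)
    sensitivity = np.sum((W @ grating) ** 2)  # output energy of the grating
    return 1.0 / np.sqrt(sensitivity + 1e-12)

def acuity(W, cutoff=0.1):
    """Highest spatial frequency whose artificial threshold is below `cutoff`."""
    n = W.shape[1]
    passing = [f for f in range(1, n // 2) if threshold_proxy(W, f) < cutoff]
    return max(passing) if passing else 0
```

Evaluated every 50 learning steps on the end-to-end map W = W2 @ W1, such threshold curves rise linearly above an elbow frequency (panel f) and the resulting acuity grows with training (panel h); absolute threshold values depend on the normalization chosen above.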
