Extended Data Fig. 1: Effects of changing the evaluation timestep and number of recurrent units.

From: Gradient-based learning drives robust representations in recurrent neural networks by balancing compression and expansion

Strongly chaotic (red) and edge-of-chaos (cyan) networks are trained to classify high-dimensional inputs. Details and shaded regions are as defined in Fig. 2e. First row: networks trained with a categorical cross-entropy loss and a learning rate of 1e-4. Second row: networks trained with a mean squared error loss and a learning rate of 1e-3. First column: evaluation time is t = 6. Second column: evaluation time is t = 10. Third column: evaluation time is t = 14. Fourth column: number of hidden neurons is increased to N = 300, with evaluation time t = 14.
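For concreteness, the following is a minimal PyTorch sketch, not the authors' code, of the two training configurations named in the caption. The vanilla-RNN architecture, input dimensionality, number of classes, optimizer, and all variable names are assumptions for illustration; only the loss functions, learning rates, hidden-layer size N = 300, and readout time t are taken from the caption.

import torch
import torch.nn as nn

N_HIDDEN = 300    # fourth column; other panels use a smaller network
EVAL_T = 14       # timestep t at which the readout is evaluated
INPUT_DIM = 100   # assumed size of the high-dimensional inputs
N_CLASSES = 2     # assumed number of classes

class VanillaRNN(nn.Module):
    def __init__(self, n_in, n_hidden, n_out):
        super().__init__()
        self.rnn = nn.RNN(n_in, n_hidden, nonlinearity="tanh", batch_first=True)
        self.readout = nn.Linear(n_hidden, n_out)

    def forward(self, x, eval_t):
        h, _ = self.rnn(x)                      # h: (batch, time, n_hidden)
        return self.readout(h[:, eval_t - 1])   # read out at timestep t

model = VanillaRNN(INPUT_DIM, N_HIDDEN, N_CLASSES)

# First row: categorical cross-entropy loss, learning rate 1e-4.
# The optimizer choice is an assumption; the caption gives only the rates.
ce_loss = nn.CrossEntropyLoss()
ce_opt = torch.optim.SGD(model.parameters(), lr=1e-4)

# Second row: mean squared error loss (on one-hot targets), learning rate 1e-3.
mse_loss = nn.MSELoss()
mse_opt = torch.optim.SGD(model.parameters(), lr=1e-3)

# Example forward pass on a batch of input sequences of length EVAL_T.
x = torch.randn(32, EVAL_T, INPUT_DIM)
logits = model(x, EVAL_T)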
