Fig. 9: Architectures of the simulated networks.

From: Backpropagation through space, time and the brain

a The GLE network used to produce the results in Fig. 8e, where multiple input channels project to successive hidden layers with different time constants. All other networks can be viewed as using a subset of this architecture. b The GLE network trained on the MNIST-1D dataset uses a single (scalar) input channel. This architecture was used to produce the results in Figs. 8d and 10a−c. c The “LagNet” architecture used to produce the results in Fig. 10f. It also receives a single input channel, but the weights of the four bottom layers are fixed to identity matrices. This induces ten parallel channels that process the input with different time constants. The MLP on top of this LagNet uses instantaneous neurons and is trained with GLE (which, in the case of equal time constants τm = τr, reduces to LE, as described in ref. 17). d The GLE network used to tackle spatial problems as in Fig. 10g. All neurons are instantaneous, and the network is equivalent to an LE network.
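To make the LagNet idea in panel c concrete, the following is a minimal sketch of the effect the fixed identity-weight layers induce: a single scalar input stream is filtered by parallel leaky integrators with distinct time constants, yielding a temporal-context feature matrix that an MLP with instantaneous neurons could read out. The discrete-time leaky-integrator model, the class and function names, and the specific time constants are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def leaky_integrate(x, tau, dt=1.0):
    # First-order low-pass filter: u[t] = u[t-1] + (dt / tau) * (x[t] - u[t-1]).
    u = np.zeros_like(x)
    for t in range(1, len(x)):
        u[t] = u[t - 1] + (dt / tau) * (x[t] - u[t - 1])
    return u

class LagNet:
    """Parallel channels filtering one scalar input stream with distinct
    time constants (the effect induced by the fixed identity-weight
    layers in panel c). Names and constants are illustrative."""

    def __init__(self, taus):
        self.taus = taus  # one (hypothetical) time constant per channel

    def forward(self, x):
        # Stack the filtered copies into a (T, n_channels) feature matrix
        # that a downstream MLP with instantaneous neurons could read out.
        return np.stack([leaky_integrate(x, tau) for tau in self.taus], axis=-1)

# Usage: ten channels with geometrically spaced (illustrative) time constants.
taus = np.geomspace(1.0, 50.0, num=10)
features = LagNet(taus).forward(np.sin(np.linspace(0, 4 * np.pi, 200)))
print(features.shape)  # (200, 10)
```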
