Fig. 2: Validating the training of seRNNs.

From: Spatially embedded recurrent neural networks reveal widespread links between structural and functional neuroscience findings

a, Validation accuracy of all converging networks, for L1 RNNs (n = 479, blue, for all plots) and seRNNs (n = 390, pink, for all plots), showing that the two groups achieve equivalent performance on the one-step inference task. For all plots, error bars correspond to two standard errors. b, Both groups of networks weaken the weights in their recurrent layer over training, indicating that the overall regularization is effective in both groups. c, As a result of their unique regularization function, seRNNs develop a negative correlation between connection weight and Euclidean distance over the course of training, whereas L1 networks show no relationship between weight and Euclidean distance. d, The seRNN regularization function also shapes network topology to favour topologically central over topologically peripheral weights, as reflected in lower weighted communicability values. e, Left: a representative seRNN shown in the 3D space in which it was trained; node size reflects node strength. This network was taken from epoch 9 at a regularization strength of 0.08 and is the network used for visualizations in the rest of this paper. Middle: the negative relationship between the seRNN's connection weights and the Euclidean distances of those connections. Pearson's correlation coefficient is provided with the corresponding P value (P = 7.03 × 10⁻⁷); no adjustments for multiple comparisons were required. Right: the seRNN's weight matrix, showing how weights are patterned throughout the network.
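The two quantities contrasted in panels c and d can be sketched concretely. The snippet below is a minimal illustration, not the authors' published code: `spatial_l1_penalty` scales each recurrent weight's L1 cost by the Euclidean distance between its units' fixed 3D coordinates, which is the kind of penalty that would produce the weight-distance relationship in panel c (an ordinary L1 penalty ignores distance). Panel d evaluates topology via weighted communicability; its standard weighted form (Crofts & Higham, 2009) is sketched in `weighted_communicability`. All function names, parameter names, and the default regularization strength are illustrative assumptions; the exact form of the seRNN regularizer is given in the paper's Methods.

```python
import numpy as np
from scipy.linalg import expm

def euclidean_distance_matrix(coords):
    """Pairwise Euclidean distances between unit positions, coords of shape (n, 3)."""
    diff = coords[:, None, :] - coords[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def spatial_l1_penalty(W, coords, gamma=0.08):
    """L1 cost of each recurrent weight, scaled by its wiring distance.
    Replacing D with a matrix of ones recovers a plain L1 penalty."""
    D = euclidean_distance_matrix(coords)
    return gamma * np.sum(np.abs(W) * D)

def weighted_communicability(W, eps=1e-12):
    """Weighted communicability (Crofts & Higham, 2009):
    C = expm(S^{-1/2} A S^{-1/2}), with A = |W| and S = diag(node strengths).
    Row sums serve as strengths here; eps guards against isolated nodes."""
    A = np.abs(W)
    s = A.sum(axis=1)
    S_inv_sqrt = np.diag(1.0 / np.sqrt(s + eps))
    return expm(S_inv_sqrt @ A @ S_inv_sqrt)
```

Under this reading, a connection is costly when it is both long (large entry in D) and topologically central (large entry in C), so training is pushed toward the short, locally clustered wiring seen in panels c and d.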