Fig. 1: Task structure and seRNNs.

From: Spatially embedded recurrent neural networks reveal widespread links between structural and functional neuroscience findings

a, We use regularization to influence network structure during training, promoting smaller network weights and hence a sparser connectome.

b, Through regularization, we embed RNNs in Euclidean space by assigning each unit a location on an evenly spaced 5 × 5 × 4 grid. We show a schematic of a six-node network in this space.

c, We similarly embed RNNs in a topological space, guiding the pruning process towards efficient intra-network communication, operationalized by a weighted communicability measure (see main text; a sketch follows below). The weighted communicability term is shown for the same network.

d, When these constraints are placed within a joint regularization term, networks are incentivized to strengthen short connections, which are core to the network's topological structure, and to weaken long connections, which are peripheral. Overall, networks are incentivized to weaken connections while optimizing task performance.

e, In the main study, we trained 1,000 L1-regularized RNNs as a baseline. L1 networks optimize task performance while minimizing the sum of the absolute values of their weights (W). The network receives task inputs from an eight-unit-wide fully connected feed-forward layer and expresses its choice through one of four choice units in the output layer. We compare these with 1,000 seRNNs, whose regularization term includes both Euclidean and topological constraints: the absolute weight matrix (W) is multiplied element-wise by the matrix of Euclidean distances between unit locations (D) and by the weighted communicability matrix (C). The elements of the resulting matrix are summed, forming the structural loss. We minimize the sum of the task loss and the structural loss (both losses are sketched below). To the right, we show the evolution of the W, D and C matrices over training.

f, Networks solve a one-step inference task. A goal is first presented for twenty steps in one of four locations on a grid: top/bottom, left/right (depicted in light blue). A ten-step delay follows, during which the goal location must be memorized. Two choice options are then presented for twenty steps. Using the memorized goal, agents must choose the option closer to the goal. In this example, given left and right options, the correct decision is to select right.
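For concreteness, the two objectives described in e can be written compactly. This is a reconstruction from the caption's description, not the paper's exact notation: γ denotes an assumed regularization strength, and the exact coefficients and scaling are defined in the main text.

\mathcal{L}_{\text{L1}} = \mathcal{L}_{\text{task}} + \gamma \sum_{i,j} \lvert W_{ij} \rvert, \qquad \mathcal{L}_{\text{seRNN}} = \mathcal{L}_{\text{task}} + \gamma \sum_{i,j} \lvert W_{ij} \rvert \, D_{ij} \, C_{ij}

In both cases, gradient descent trades off task performance against the structural term, so connections survive training only if their contribution to the task outweighs their structural cost.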
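The weighted communicability measure of c is defined in the main text; the following is a minimal sketch assuming the common Crofts and Higham (2009) normalization applied to absolute weights. The function name and the small epsilon constant are our own additions.

import numpy as np
from scipy.linalg import expm

def weighted_communicability(W):
    """Weighted communicability of a weight matrix W.

    Sketch assuming the Crofts & Higham (2009) definition:
    C = expm(S^(-1/2) |W| S^(-1/2)), where S is the diagonal
    matrix of node strengths. The normalization stops
    high-strength nodes from dominating the matrix exponential.
    """
    A = np.abs(W)                    # assume magnitudes of weights
    s = A.sum(axis=1) + 1e-12        # node strengths (epsilon avoids division by zero)
    inv_sqrt_s = 1.0 / np.sqrt(s)
    return expm(inv_sqrt_s[:, None] * A * inv_sqrt_s[None, :])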
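Putting b, c and e together, a sketch of the structural loss for a 100-unit network on the 5 × 5 × 4 grid might look as follows. The gamma value is an assumed placeholder; the training loop and exact scaling are given in the main text.

import numpy as np

# Unit coordinates on the evenly spaced 5 x 5 x 4 grid (100 units, panel b).
coords = np.array([[x, y, z] for x in range(5) for y in range(5) for z in range(4)])

# Pairwise Euclidean distance matrix D between unit locations (panel b).
D = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

def structural_loss(W, gamma=1e-2):
    """Structural term of the seRNN objective (panel e):
    sum over |W| * D * C, scaled by an assumed strength gamma."""
    C = weighted_communicability(W)   # from the sketch above
    return gamma * np.sum(np.abs(W) * D * C)

During training, this term would be added to the task loss so that gradient descent jointly minimizes both, which produces the incentive structure described in d.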
