Fig. 3: Sequence anticipation and recall in a network with recurrent connectivity.
From: Sequence anticipation and spike-timing-dependent plasticity emerge from a predictive learning rule

a In this example, we simulated a network of 10 neurons with nearest-neighbor recurrent connectivity; that is, each neuron n in the network received inputs from its adjacent neurons n − 1 and n + 1. The first and last neurons received inputs only from the second and second-to-last neurons, respectively. Shown are the connections to the second neuron. Each neuron in the network also received inputs from 8 pre-synaptic neurons that fired sequentially with relative delays of 2 ms, resulting in a total sequence length of 16 ms (pink spike pattern). The sequence onset of the pre-synaptic inputs to the (n + 1)-th neuron started 4 ms after the sequence onset for the n-th neuron, and so on. Each epoch contained two different sources of noise: (1) random jitter of the spike times in the sequence (between −2 and 2 ms); (2) random background firing of the pre-synaptic neurons according to a homogeneous Poisson process with rate λ = 10 Hz. Both the connections from the pre-synaptic neurons to the network neurons and the recurrent connections between the network neurons were plastic and were modified according to the predictive learning rule described in the main text. b Raster plot of the network’s activity during different epochs of training: (1) The “before” case, in which only the pre-synaptic neurons of the first network neuron exhibited sequential firing; the stochastic background firing was still present in all 8 × 10 = 80 pre-synaptic neurons. (2) The “learning” or conditioning case, in which we presented the entire sequence (repeated 2000 times). (3) The “after” or “recall” condition, identical to the “before” condition but now after learning. (4) Same as (3), but showing an example in which spontaneous recall occurs due to the background stochastic firing. The neurons are ordered as in panel a. c The synaptic weight matrix obtained at the end of training (epoch 1000).
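The pre-synaptic input pattern described in panel a can be sketched as follows. This is a minimal illustration, not the authors' code: the parameter names, the epoch duration `T_EPOCH`, and the random seed are assumptions; the learning rule itself is not included.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed, illustrative

N_NET = 10      # network neurons
N_PRE = 8       # pre-synaptic inputs per network neuron
DT_SEQ = 2.0    # ms between successive spikes in a sequence
DT_ONSET = 4.0  # ms between sequence onsets of adjacent network neurons
JITTER = 2.0    # spike jitter drawn uniformly from [-2, 2] ms
RATE_BG = 10.0  # Hz, homogeneous Poisson background rate
T_EPOCH = 100.0 # ms, epoch duration (assumed value)

def make_epoch():
    """Return one epoch of input as time-sorted (presynaptic_id, time_ms) pairs."""
    spikes = []
    # jittered sequential spikes for each network neuron's input pool
    for n in range(N_NET):
        onset = n * DT_ONSET
        for k in range(N_PRE):
            t = onset + k * DT_SEQ + rng.uniform(-JITTER, JITTER)
            spikes.append((n * N_PRE + k, t))
    # homogeneous Poisson background in all 8 x 10 = 80 pre-synaptic neurons
    for pre in range(N_NET * N_PRE):
        n_bg = rng.poisson(RATE_BG * T_EPOCH / 1000.0)
        for t in rng.uniform(0.0, T_EPOCH, n_bg):
            spikes.append((pre, float(t)))
    return sorted(spikes, key=lambda s: s[1])

epoch = make_epoch()
```

During the "before" and "recall" epochs of panel b, only neuron 0's pool would receive the sequential spikes, while the Poisson background would remain active everywhere.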
Top: the i-th column contains the synaptic weights learned by the i-th network neuron; its 8 entries are the weights of that neuron’s pre-synaptic inputs. Bottom: the nearest-neighbor recurrent weights onto the i-th neuron. Note that the first and last neurons do not receive inputs from an (n − 1)-th and (n + 1)-th neuron, respectively. d Evolution of the duration of network activity across epochs. We estimated the total duration of the network’s activity as the temporal difference between the last spike of the last neuron and the first spike of the first neuron. The mean duration and its standard deviation were computed from 100 simulations with different stochastic background firing and random jitter of the spike times.