Figure 2 | Scientific Reports

From: Event-based backpropagation can compute exact gradients for spiking neural networks

Illustration of EventProp-based gradient calculation in two leaky integrate-and-fire neurons connected with weight w and a spike-time-dependent loss \(\mathcal {L}\). The forward pass (B, C) computes the spike times of both neurons, and the backward pass (D–G) backpropagates errors at spike times, yielding the gradient as given in Eq. (2). (A) The upper neuron receives 100 independent Poisson spike trains with frequency \(200\,\text {Hz}\) across randomly initialized weights and is connected to the lower neuron via a single weight w. The loss \(\mathcal {L}\) is a sum of the spike times of the lower neuron. (B, C) Membrane potential of the upper and lower neuron. Spike times of the upper neuron are indicated by arrows. (D, E) Adjoint variable \(\lambda _I\) of the upper and lower neuron. The lower neuron backpropagates its error signal \(\lambda _V-\lambda _I\) at the upper neuron’s spike times (indicated by arrows). (F, G) Accumulated gradient for one of the 100 input weights of the upper neuron and for the weight w connecting the upper and lower neuron. EventProp computes the adjoint variables from \(t=T\) to \(t=0\) and accumulates the gradients by sampling \(-\tau _\text {syn}\lambda _I\) when spikes are transmitted across the respective weight. The gradients computed in this way match the gradients computed via central differences (dashed lines) up to a relative deviation of less than \(10^{-7}\).
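The central-difference reference used in panels (F, G) can be sketched for a single leaky integrate-and-fire neuron. The following is a minimal illustration, not the paper's implementation: it assumes one input spike at \(t=0\) with weight w, dimensionless time constants, a threshold of 1, and a loss equal to the first spike time; the membrane is integrated with explicit Euler and the threshold crossing is interpolated linearly, then dL/dw is estimated by central differences as in the figure's check.

```python
# Minimal sketch (hypothetical parameter values, not from the paper):
# LIF dynamics dV/dt = (I - V)/tau_mem, dI/dt = -I/tau_syn, with one
# input spike at t = 0 that sets I = w. Loss = first spike time.
TAU_MEM, TAU_SYN = 1.0, 0.5      # assumed dimensionless time constants
THRESHOLD, DT, T_MAX = 1.0, 1e-5, 2.0

def first_spike_time(w):
    """Explicit-Euler integration; returns the first threshold-crossing
    time, linearly interpolated within the crossing step, or None."""
    v, i, t = 0.0, w, 0.0
    while t < T_MAX:
        v_next = v + DT * (i - v) / TAU_MEM
        i -= DT * i / TAU_SYN
        t += DT
        if v_next >= THRESHOLD:
            frac = (THRESHOLD - v) / (v_next - v)
            return t - DT + frac * DT
        v = v_next
    return None

def central_difference_grad(w, eps=1e-3):
    """dL/dw estimated by central differences, the reference against
    which the figure compares the EventProp gradients."""
    return (first_spike_time(w + eps) - first_spike_time(w - eps)) / (2 * eps)
```

For these parameters the gradient is negative: increasing w makes the neuron cross threshold earlier, so the spike-time loss decreases. In the paper, this finite-difference estimate serves only as a sanity check against the adjoint-based gradients.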