Table 2 The adjoint spiking network to Table 1 that computes the adjoint variable \(\lambda_I\) needed for the gradient [Eq. (2)]. The adjoint variables are computed in reverse time (i.e., from \(t=T\) to \(t=0\)), with \('=-\frac{\mathrm{d}}{\mathrm{d}t}\) denoting the reverse-time derivative. \((\lambda_V^-)_{n(k)}\) experiences jumps at the spike times \(t^{\text{post}}_{k}\), where \(n(k)\) is the index of the neuron that caused the \(k\)th spike. Computing this system amounts to the backpropagation of errors in time. The initial conditions are \(\lambda_V(T)=\lambda_I(T)=0\), and we provide \(\lambda_V^-\) in terms of \(\lambda_V^+\) because the computation happens in reverse time.
From: Event-based backpropagation can compute exact gradients for spiking neural networks
| Free dynamics | Transition condition | Jump at transition |
|---|---|---|
| \(\begin{aligned} \tau_\text{mem}\lambda_V' &= -\lambda_V - \frac{\partial l_V}{\partial V} \\ \tau_\text{syn}\lambda_I' &= -\lambda_I + \lambda_V \end{aligned}\) | \(\begin{aligned} t - t^{\text{post}}_{k} &= 0 \\ \text{for any } &k \end{aligned}\) | \(\begin{aligned} (\lambda_V^-)_{n(k)} &= (\lambda_V^+)_{n(k)} + \frac{1}{\tau_\text{mem}(\dot{V}^-)_{n(k)}} \bigg[\vartheta(\lambda_V^+)_{n(k)} \\ &\quad + \left( W^\top (\lambda_V^+ - \lambda_I)\right)_{n(k)} + \frac{\partial l_\text{p}}{\partial t^{\text{post}}_{k}} + l_V^- - l_V^+\bigg] \end{aligned}\) |
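To make the table concrete, the reverse-time integration of the free adjoint dynamics with spike-triggered jumps in \(\lambda_V\) can be sketched numerically. The code below is a minimal illustration, not the paper's reference implementation: it assumes explicit Euler steps, a recorded forward spike train, zero voltage loss (\(l_V = 0\), so the \(\partial l_V/\partial V\) and \(l_V^- - l_V^+\) terms vanish), and illustrative names (`tau_mem`, `v_dot`, `dlp_dt`, `theta` for \(\vartheta\)) that do not appear in the source.

```python
import numpy as np

def backward_adjoint(T, dt, tau_mem, tau_syn, W, spikes, v_dot, dlp_dt,
                     theta, n_neurons):
    """Integrate lambda_V, lambda_I from t = T back to t = 0.

    spikes : list of (t_post_k, n_k) tuples sorted by time, where n_k = n(k)
             is the neuron that caused the k-th spike
    v_dot  : v_dot[k] = (dV/dt)^- of neuron n(k) just before spike k (nonzero)
    dlp_dt : dlp_dt[k] = partial l_p / partial t_post_k
    """
    lam_V = np.zeros(n_neurons)     # lambda_V(T) = 0
    lam_I = np.zeros(n_neurons)     # lambda_I(T) = 0
    grad_acc = np.zeros(n_neurons)  # accumulates lambda_I (illustrative stand-in
                                    # for the gradient integral of Eq. (2))
    t = T
    k = len(spikes) - 1             # spikes are visited in reverse order
    while t > 0:
        t_prev = t - dt
        # Transition condition: apply the lambda_V jump for spikes in (t_prev, t].
        # Arriving from later times, the current lam_V is lambda_V^+.
        while k >= 0 and spikes[k][0] > t_prev:
            _, n_k = spikes[k]
            bracket = (theta * lam_V[n_k]
                       + (W.T @ (lam_V - lam_I))[n_k]
                       + dlp_dt[k])               # l_V^- - l_V^+ = 0 here
            lam_V[n_k] += bracket / (tau_mem * v_dot[k])
            k -= 1
        # Free dynamics, explicit Euler step in reverse time (' = -d/dt).
        lam_V += dt / tau_mem * (-lam_V)          # dl_V/dV = 0 in this sketch
        lam_I += dt / tau_syn * (-lam_I + lam_V)
        grad_acc += lam_I * dt
        t = t_prev
    return lam_V, lam_I, grad_acc

# Toy example: two neurons, one spike each, illustrative loss gradients.
W = np.array([[0.0, 0.5], [0.5, 0.0]])
spikes = [(0.3, 0), (0.7, 1)]
v_dot = {0: 1.0, 1: 1.0}
dlp_dt = {0: 0.1, 1: -0.2}
lam_V, lam_I, grad_acc = backward_adjoint(
    T=1.0, dt=1e-3, tau_mem=1e-2, tau_syn=5e-3, W=W,
    spikes=spikes, v_dot=v_dot, dlp_dt=dlp_dt, theta=1.0, n_neurons=2)
```

Note that only \(\lambda_V\) jumps at spike times; \(\lambda_I\) stays continuous and merely relays \(\lambda_V\) through its own leaky dynamics, which is why the gradient in Eq. (2) can be read off from \(\lambda_I\) alone.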