Fig. 6: Temporal feature detection in adLIF networks. | Nature Communications

From: Advancing spatio-temporal processing through adaptation in spiking neural networks

a Two samples of classes 2 and 17 of the burst sequence detection (BSD) task (see main text). b Classification error of adLIF and LIF networks with equal parameter counts for different numbers of classes in the BSD task. c Schematic illustration of network feature visualization. An initial noise sample \(X^0\) is passed through a trained network with frozen parameters. The classification loss of the network output with respect to a predefined target class c is computed and back-propagated through the network to obtain the gradient \({\nabla }_{X}L(X,c){| }_{{X}^{0}}\) of the loss with respect to the input \(X^0\). This gradient is applied to the sample, and the procedure is repeated to obtain a final sample \(X^K\) after K = 400 iterations. d Samples generated by the feature-visualization procedure from panel c for networks trained on the 20-class BSD task. White dots denote the locations of the class-descriptive bursts. We generated samples for classes 2 and 17, the most-misclassified classes of the LIF and adLIF networks, respectively. e Samples generated from an adLIF network and a LIF network trained on SHD for different target classes c (top), and the corresponding network output over time (bottom). The gray shaded area (t > 100) denotes the time span relevant for the loss; all outputs before this time span were ignored (see "Methods" for details).
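The feature-visualization procedure of panel c can be sketched as a standard gradient-descent loop over the input with frozen network weights. The sketch below is a hedged illustration only: it uses a generic PyTorch module as a stand-in for the trained spiking network, and the function name, learning rate, and loss choice (cross-entropy) are assumptions not specified in the caption.

```python
import torch
import torch.nn as nn

def visualize_class_features(model, input_shape, target_class, n_iter=400, lr=0.1):
    """Illustrative sketch of the panel-c feature-visualization loop:
    iteratively update a noise input to minimize the classification loss
    for a fixed target class c, with all network parameters frozen.
    The model here is a generic nn.Module stand-in, not the adLIF network."""
    model.eval()
    for p in model.parameters():
        p.requires_grad_(False)            # freeze network parameters

    x = torch.randn(input_shape, requires_grad=True)   # initial noise sample X^0
    loss_fn = nn.CrossEntropyLoss()                    # assumed loss choice
    target = torch.tensor([target_class])

    for _ in range(n_iter):
        logits = model(x)                  # forward pass through frozen network
        loss = loss_fn(logits, target)     # loss w.r.t. target class c
        loss.backward()                    # gradient of loss w.r.t. input X
        with torch.no_grad():
            x -= lr * x.grad               # apply gradient to the sample
        x.grad.zero_()
    return x.detach()                      # final sample X^K after n_iter steps
```

For a differentiable surrogate model, repeatedly descending this loss drives the input toward a sample the network confidently assigns to the target class, which is what panels d and e visualize.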
