Fig. 5: Fine-tuning for noise robustness, quantization, and latency.
From: High-performance deep spiking neural networks with 0.3 spikes per neuron

VGG16 SNN on CIFAR10. In all cases, after only 10 epochs of fine-tuning (w/ FT, purple squares), the accuracy of the initially mapped network (w/o FT, blue circles) is significantly improved. a Accuracy as a function of the standard deviation (SD) of random noise added to each spike time in the network (spiking-time jitter). b Quantizing spiking times in the network to a given number of time steps per layer. c Representing all weights \({W}_{ij}^{(n)}\) with a given number of bits. d Reducing the latency by reducing the ranges \([{t}_{\min }^{(n)},\, {t}_{\max }^{(n)})\).
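For readers who want a concrete picture of the four perturbations probed in panels a–d, the sketch below illustrates, under stated assumptions, how they could be applied to a layer's spike times and weights. This is not the authors' code: the function names, the uniform symmetric quantization scheme, and the NumPy implementation are illustrative assumptions; only the quantities themselves (jitter SD, time steps per layer, weight bits, and the per-layer ranges \([{t}_{\min }^{(n)},\, {t}_{\max }^{(n)})\)) come from the caption.

```python
import numpy as np


def jitter_spike_times(t, sd, rng=None):
    """Panel a (assumed form): add Gaussian noise with standard deviation `sd`
    to every spike time of a layer."""
    rng = rng or np.random.default_rng()
    return t + rng.normal(0.0, sd, size=t.shape)


def quantize_spike_times(t, t_min, t_max, n_steps):
    """Panel b (assumed form): snap spike times in [t_min, t_max) onto
    `n_steps` uniform time steps for this layer."""
    step = (t_max - t_min) / n_steps
    return t_min + np.round((t - t_min) / step) * step


def quantize_weights(w, n_bits):
    """Panel c (assumed scheme): represent weights W_ij^(n) with `n_bits`
    using uniform symmetric quantization (n_bits >= 2)."""
    scale = np.max(np.abs(w)) / (2 ** (n_bits - 1) - 1)
    return np.round(w / scale) * scale


def shrink_time_range(t_min, t_max, factor):
    """Panel d (assumed form): reduce latency by shrinking the layer's
    allowed spike-time range [t_min, t_max) by `factor` < 1."""
    return t_min, t_min + factor * (t_max - t_min)


# Example: quantize one layer's spike times to 8 steps in a hypothetical range [0, 1).
t = np.array([0.12, 0.48, 0.90])
t_q = quantize_spike_times(t, 0.0, 1.0, n_steps=8)
```

In the experiment summarized by the figure, each such perturbation is applied to the initially mapped network and the accuracy is measured before and after the short (10-epoch) fine-tuning phase.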