Fig. 2: Performance comparison on temporal processing tasks.

From: Efficient and robust temporal processing with neural oscillations modulated spiking neural networks

a Performance of Rhythm-SNNs versus their non-Rhythm counterparts on the PS-MNIST and ECG datasets. b Performance of Rhythm-SNNs and their non-Rhythm counterparts on the DVS-Gesture dataset, with input sequence lengths ranging from 500 to 1500. For both (a) and (b), the experiments were conducted over three runs with different random seeds, and the error bars represent the standard deviation. c Normalized temporal gradients for all hidden neurons in FFSNN, ASRNN, and their Rhythm-SNN counterparts, computed on a mini-batch from the PS-MNIST dataset. Rhythm-SNNs effectively allocate more gradient to earlier time steps, facilitating the learning of long-range temporal dependencies. d Learning curves for Rhythm-SNNs and their non-Rhythm counterparts under identical training conditions. Solid lines represent mean accuracies, while shaded areas indicate the standard deviation of accuracy across four runs with different random initializations. e Energy costs and corresponding accuracy of different models on the PS-MNIST dataset. The number beside each vanilla model's circle indicates the ratio of its energy cost to that of its rhythmic counterpart. f Layer-wise firing rate comparison across the models shown in (e).
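Panels (c) and (f) summarize quantities derived from a trained network's internals: the distribution of gradient magnitude over time steps and the average firing activity per layer. As a rough illustration only, the sketch below shows one way such quantities could be computed for a generic spiking network in PyTorch; the function names, the hidden_states and spike_trains interfaces, and the use of PyTorch itself are assumptions for exposition, not the authors' implementation.

import torch

def normalized_temporal_gradients(loss, hidden_states):
    """Panel (c)-style quantity: gradient magnitude reaching each time step,
    normalized so the values sum to 1 over time.

    hidden_states: list of tensors [batch, n_hidden], one per time step,
    all part of the autograd graph that produced `loss` (assumed interface).
    """
    grads = torch.autograd.grad(loss, hidden_states, retain_graph=True)
    # Sum absolute gradient over batch and neurons -> one scalar per time step.
    per_step = torch.stack([g.abs().sum() for g in grads])
    return per_step / per_step.sum()

def layerwise_firing_rate(spike_trains):
    """Panel (f)-style quantity: mean firing probability per layer.

    spike_trains: dict {layer_name: tensor [time, batch, n_neurons]} of
    binary spikes recorded during a forward pass (assumed interface).
    """
    return {name: s.float().mean().item() for name, s in spike_trains.items()}

A skewed output of normalized_temporal_gradients (mass concentrated at late time steps) would indicate vanishing gradients toward earlier inputs, which is the failure mode panel (c) suggests the rhythmic modulation mitigates; lower values from layerwise_firing_rate correspond to sparser activity and hence lower energy cost, as compared in panels (e) and (f).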