Fig. 5: Brain-inspired modifications enable generative replay to scale to problems with many tasks. | Nature Communications

From: Brain-inspired replay for continual learning with artificial neural networks

a The permuted MNIST protocol with 100 permutations. b When performed according to the Domain-IL scenario (i.e., no task labels available at test time), the best performing current method is synaptic intelligence (SI). Although standard generative replay (GR) outperforms SI for the first 10 tasks, its performance rapidly degrades after ~15 tasks. With our brain-inspired modifications (BI-R; see below), generative replay still outperforms SI after 100 tasks. Combining BI-R with SI yields a further boost in performance. Learning without forgetting (LwF) performs poorly on this task protocol because the inputs of different tasks are completely uncorrelated. Reported is the average test accuracy over all permutations so far. Displayed are the means over 5 repetitions; shaded areas are ±1 SEM. Joint: training using all data so far ('upper bound'), EWC: elastic weight consolidation, None: sequential training in the standard way.
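For readers unfamiliar with the protocol in panel a, the sketch below illustrates how a permuted MNIST task sequence is typically constructed: each task applies one fixed random permutation to the 784 pixels of every image, so successive tasks share no input structure. This is a minimal illustrative sketch, not the paper's implementation; the function names and the choice of an identity permutation for the first task are assumptions.

```python
import numpy as np

def make_permuted_tasks(num_tasks, num_pixels=784, seed=0):
    """Create one fixed pixel permutation per task.

    Assumption: the first task uses the original (identity) ordering,
    as is common in permuted MNIST setups.
    """
    rng = np.random.default_rng(seed)
    perms = [np.arange(num_pixels)]  # task 1: unpermuted MNIST
    perms += [rng.permutation(num_pixels) for _ in range(num_tasks - 1)]
    return perms

def apply_permutation(images, perm):
    """Apply a task's fixed permutation to flattened images of shape (batch, num_pixels)."""
    return images[:, perm]

# 100 tasks, as in the figure; a dummy flattened "image" stands in for MNIST data.
perms = make_permuted_tasks(100)
batch = np.arange(784, dtype=np.float32)[None, :]
permuted = apply_permutation(batch, perms[1])
```

Because each permutation merely shuffles pixels, every task is exactly as hard as the original MNIST problem in isolation; the difficulty lies entirely in learning the 100 tasks sequentially without forgetting.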