Fig. 6: Brain-inspired modifications enable generative replay to scale to problems with complex inputs. | Nature Communications


From: Brain-inspired replay for continual learning with artificial neural networks


a The split CIFAR-100 protocol performed according to two different scenarios. b In the task-incremental learning scenario (Task-IL), most continual learning methods are successful, although standard generative replay (GR) performs even worse than the naive baseline. With our brain-inspired modifications (see below), however, generative replay outperforms the other methods. c In the class-incremental learning scenario (Class-IL), no existing continual learning method that does not store data is able to prevent catastrophic forgetting. Our brain-inspired replay (BI-R; see below), especially when combined with synaptic intelligence (SI), does achieve reasonable performance on this challenging, unsolved benchmark. Reported are average test accuracies based on all tasks/classes so far. Displayed are means over 10 repetitions; shaded areas indicate ±1 SEM. d Examples of images replayed with standard generative replay during training on the final task. Joint: training using all data so far ('upper bound'); LwF: learning without forgetting; EWC: elastic weight consolidation; None: sequential training in the standard way.
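The difference between the two scenarios can be made concrete with a minimal sketch of the split CIFAR-100 protocol. The sketch below assumes the common setup of 10 tasks with 10 classes each in the default class order 0–99 (these specifics are illustrative assumptions, not taken from the figure); the key point is which classes compete at test time in each scenario.

```python
# Illustrative sketch of the split CIFAR-100 protocol (assumed: 10 tasks
# of 10 classes each, classes ordered 0..99).

NUM_CLASSES = 100
NUM_TASKS = 10
CLASSES_PER_TASK = NUM_CLASSES // NUM_TASKS

# Partition the 100 classes into 10 disjoint tasks.
tasks = [list(range(t * CLASSES_PER_TASK, (t + 1) * CLASSES_PER_TASK))
         for t in range(NUM_TASKS)]

def candidate_classes(scenario, task_id, tasks_seen):
    """Return the classes the network must discriminate between at test time.

    Task-IL:  task identity is given, so only that task's classes compete.
    Class-IL: task identity is unknown, so all classes seen so far compete.
    """
    if scenario == "Task-IL":
        return tasks[task_id]
    elif scenario == "Class-IL":
        return [c for t in range(tasks_seen) for c in tasks[t]]
    raise ValueError(f"unknown scenario: {scenario}")

# Example: after training on the first 3 tasks, a Task-IL decision is among
# 10 classes, while a Class-IL decision is among all 30 classes seen so far.
print(candidate_classes("Task-IL", task_id=1, tasks_seen=3))
print(len(candidate_classes("Class-IL", task_id=1, tasks_seen=3)))
```

This is why Class-IL is the harder, "unsolved" setting in panel c: the output space grows with every task, so the network must keep old classes discriminable without access to their data.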
