Fig. 3: Replay might be required for artificial neural networks to incrementally learn new classes.
From: Brain-inspired replay for continual learning with artificial neural networks

a The split MNIST task protocol performed according to two different scenarios. b In the task-incremental learning scenario (Task-IL), all compared continual learning methods perform very well. c In the class-incremental learning scenario (Class-IL), only generative replay (GR) prevents catastrophic forgetting. Reported is the average test accuracy over all tasks/digits seen so far. Displayed are the means over 20 repetitions; shaded areas indicate ±1 SEM. Joint: training using all data so far ('upper bound'); LwF: learning without forgetting; SI: synaptic intelligence; XdG: context-dependent gating; EWC: elastic weight consolidation; None: sequential training in the standard way.
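
To make the protocol in panel a and the Task-IL/Class-IL distinction concrete, here is a minimal Python sketch, not the authors' implementation: it assumes the standard split of the ten MNIST digits into five two-class tasks, and the helper names (`make_split_mnist`, `task_il_predict`, `class_il_predict`) and the use of torchvision are illustrative choices.

```python
import torch
from torchvision import datasets, transforms

# Split MNIST: the ten digits are divided into five tasks of two classes each.
TASKS = [(0, 1), (2, 3), (4, 5), (6, 7), (8, 9)]

def make_split_mnist(train=True, root="./data"):
    """Return one MNIST subset per task, each holding two digit classes."""
    mnist = datasets.MNIST(root, train=train, download=True,
                           transform=transforms.ToTensor())
    subsets = []
    for classes in TASKS:
        mask = torch.isin(mnist.targets, torch.tensor(classes))
        idx = mask.nonzero(as_tuple=True)[0].tolist()
        subsets.append(torch.utils.data.Subset(mnist, idx))
    return subsets

def task_il_predict(logits, task_id):
    """Task-IL: task identity is given at test time, so the network only has
    to choose between the two classes belonging to that task."""
    allowed = torch.tensor(TASKS[task_id])
    return allowed[logits[:, allowed].argmax(dim=1)]

def class_il_predict(logits, n_tasks_so_far):
    """Class-IL: task identity is withheld, so the network must pick among
    all digits seen in any of the tasks trained on so far."""
    seen = torch.tensor([c for t in TASKS[:n_tasks_so_far] for c in t])
    return seen[logits[:, seen].argmax(dim=1)]
```

Under this sketch, the caption's "average test accuracy over all tasks/digits seen so far" would correspond to evaluating with `task_il_predict` on each task seen so far and averaging (panel b), or with `class_il_predict` over all seen digits (panel c); Class-IL is harder because the restriction to two candidate classes is removed.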