Fig. 4: Generative replay is remarkably efficient and robust.

From: Brain-inspired replay for continual learning with artificial neural networks

It is possible to substantially reduce the quantity or the quality of replay without severely affecting performance. a, b Shown is the average test accuracy (based on all tasks/digits) of generative replay on the split MNIST protocol performed according to the task-incremental learning scenario (Task-IL; left) and the class-incremental learning scenario (Class-IL; right), both a as a function of the total number of replayed samples per mini-batch and b as a function of the number of units in the hidden layers of the variational autoencoder (VAE) used for generating replay. As a control, also shown is a variant of generative replay in which the networks are reinitialized before each new task/episode. For comparison, the average test accuracy of the other methods is indicated on the left of each graph (see also Fig. 3). Displayed are means over 20 repetitions; shaded areas indicate ±1 SEM. c Random samples from the generative model after finishing training on the fourth task (i.e., examples of what is replayed during training on the final task) for a VAE with 10, 100, and 1000 units per hidden layer, illustrating the low quality of the samples being replayed. None: sequential training in the standard way; SI: synaptic intelligence; XdG: context-dependent gating; EWC: elastic weight consolidation; LwF: learning without forgetting.
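To make the setup varied in the figure concrete, the following is a minimal sketch (in PyTorch; not the authors' implementation, and details such as the latent dimensionality, activation functions, and loss terms are assumptions) of a symmetric MLP VAE whose hidden-layer width corresponds to the quantity varied in panels b and c, together with a helper that mixes generated replay into each mini-batch, the quantity varied in panel a.

```python
# Hypothetical sketch of a VAE used for generative replay on split MNIST.
import torch
import torch.nn as nn

class ReplayVAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=100, z_dim=100):
        super().__init__()
        # Encoder: two hidden layers of h_dim units (10, 100, or 1000 in panel c).
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        # Decoder mirrors the encoder.
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return torch.sigmoid(self.dec(z)), mu, logvar

    @torch.no_grad()
    def sample(self, n):
        # Draw replayed inputs from the prior; panel c shows such samples.
        z = torch.randn(n, self.mu.out_features)
        return torch.sigmoid(self.dec(z))

def mixed_batch(current_x, previous_vae, n_replay):
    """Combine current-task data with n_replay generated samples; panel a varies
    the total number of replayed samples per mini-batch."""
    return torch.cat([current_x, previous_vae.sample(n_replay)], dim=0)
```

In this sketch, `previous_vae` stands for a copy of the generative model trained on earlier tasks, and the network trained on the new task would receive the concatenated batch; the robustness result in the figure corresponds to performance degrading only mildly as `n_replay` or `h_dim` is reduced.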
