Fig. 8: Addition and ablation experiments to tease apart the contributions of individual modifications.
From: Brain-inspired replay for continual learning with artificial neural networks

The average overall accuracy is shown for standard generative replay (GR) with individual modifications added ('+', left) and for brain-inspired replay (BI-R) with individual modifications removed ('−', right) for (a) permuted MNIST with 100 permutations, (b) the task-incremental learning scenario on CIFAR-100 and (c) the class-incremental learning scenario on CIFAR-100. Note that internal replay was not used for permuted MNIST, as no convolutional layers were used for this protocol. Each bar reflects the mean over 5 (permuted MNIST) or 10 (CIFAR-100) repetitions; error bars are ±1 SEM; individual repetitions are indicated by dots. Dotted grey lines indicate chance level. Solid black lines show performance when the base network is trained only on the final task/episode (mean over 5 or 10 repetitions, shaded areas are ±1 SEM), which can be interpreted as chance performance on all but the last seen data. Abbreviations: rtf, replay-through-feedback; con, conditional replay; gat, gating based on internal context; int, internal replay; dis, distillation.
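As a concrete illustration of the summary statistics behind each bar, the mean and the standard error of the mean (SEM) over repetitions could be computed as sketched below. The accuracy values are invented for illustration and are not results from the paper.

```python
import math

def mean_and_sem(values):
    """Return (mean, standard error of the mean) for per-repetition accuracies.

    SEM = sample standard deviation (ddof = 1) divided by sqrt(n),
    matching the ±1 SEM error bars described in the caption.
    """
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return mean, sd / math.sqrt(n)

# Hypothetical accuracies from 5 repetitions of one permuted-MNIST condition
accs = [0.92, 0.94, 0.91, 0.93, 0.95]
m, sem = mean_and_sem(accs)
print(round(m, 3), round(sem, 4))  # → 0.93 0.0071
```

The bar height would be `m` and the error bar would extend from `m - sem` to `m + sem`.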