Fig. 5: Comparison of SQHN architectures with different numbers of hidden layers.
From: A sparse quantized hopfield network for online-continual memory

A Depiction of various SQHN architectures. B Recall accuracy across three datasets (CIFAR-100, Tiny ImageNet, Caltech 256) under white noise and several occlusion scenarios. C Visualization of the black, color, and noise occlusions. D Recall accuracy during online training without noise (top row) and MSE on a test set (bottom row). The lower bound on recall accuracy posited by Theorem 2 (Supplementary Note 6) is marked by a gray line. Models were tested with different maximum numbers of neurons per node (200, 600, 1000). E Recognition accuracy for three SQHN models, with and without noise. 500 neurons are allocated to each node (vertical dotted line marks when all 500 neurons are grown). CIFAR-100 data was used as the training and in-distribution set, while a flipped-pixel version of the Street View House Numbers (SVHN) dataset was used as the out-of-distribution set. The best guessing strategy yields 66% accuracy (horizontal dotted line).