Fig. 5: Training at the edge with a hybrid FeCAP/memristor array.

From: A ferroelectric–memristor memory for both training and inference

a, Diagram showing the fully connected three-layer neural network used for on-chip training on the MNIST digit classification task. b, Representation of the on-chip training procedure based on the stochastic gradient descent algorithm. The FeCAP array used to store the hidden weights is stochastically updated on each new input image, whereas the memristor array used to store the analogue weights is updated only after every k images. c, Accuracy after ten training epochs as a function of the k parameter. The dot plots represent ten individual data points from separate training runs. The inset shows the mean accuracy values with the corresponding minimum and maximum ranges. d, Number of memristor and FeCAP programming operations at the end of one training round as a function of k. e, Total programming energy for memristors and FeCAPs at the end of one training round as a function of k. We assumed a programming energy of 1 pJ and 100 fJ for a single memristor and FeCAP, respectively (refs. 47,48). f, MNIST classification, Fashion-MNIST classification and ECG detection accuracies obtained with floating-point (FP) training and with our proposed training strategy based on the hybrid memory (HM), both in the ideal memory case (w/o Var) and with artificially induced errors in the transfer operation due to device variability (w/ Var). For the MNIST dataset, we also include a case in which the network was trained with ten-image mini-batches, with batch normalization (Methods). The bar chart represents the mean value, and the dot plots show the corresponding individual data points (n = 5).
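To make the schedule in panel b concrete, the following Python sketch mimics the two-tier update rule: hidden weights (FeCAP-like) receive a gradient update on every image, and the analogue weights (memristor-like) are refreshed from them only every k images. This is a minimal illustration under stated assumptions, not the authors' implementation: it uses a single linear layer instead of the three-layer network of panel a, deterministic updates instead of stochastic FeCAP switching, and random placeholder data; all identifiers are illustrative. The final energy tally applies the caption's assumed per-write costs (1 pJ per memristor, 100 fJ per FeCAP) in the spirit of panel e.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 784, 10        # MNIST-like input size and class count
k = 10                       # transfer period (the k of panels b-e)
lr = 0.01                    # learning rate

hidden_w = rng.normal(0, 0.01, (n_in, n_out))  # FeCAP "hidden" weights
analogue_w = hidden_w.copy()                   # memristor "analogue" weights

fecap_writes = 0
memristor_writes = 0

for step in range(1, 1001):
    x = rng.random(n_in)                 # placeholder "image"
    y = rng.integers(0, n_out)           # placeholder label

    # Forward pass through the analogue (inference) weights.
    logits = x @ analogue_w
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # Softmax cross-entropy gradient for this single image.
    grad = np.outer(x, probs - np.eye(n_out)[y])

    # Per-image update of the hidden (FeCAP) weights.
    hidden_w -= lr * grad
    fecap_writes += hidden_w.size

    # Every k images, transfer hidden weights to the memristor array.
    if step % k == 0:
        analogue_w = hidden_w.copy()
        memristor_writes += analogue_w.size

# Energy estimate using the caption's assumed per-write costs.
energy_pj = memristor_writes * 1.0 + fecap_writes * 0.1
print(f"FeCAP writes: {fecap_writes}, memristor writes: {memristor_writes}")
print(f"Estimated programming energy: {energy_pj:.0f} pJ")
```

Because memristor writes dominate the energy budget under these assumed costs, increasing k trades update frequency of the analogue weights for a lower total programming energy, which is the trend panels d and e quantify.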
