Fig. 3 | Scientific Reports

From: Training convolutional neural networks with the Forward–Forward Algorithm

The best MNIST performance of an FF-trained CNN architecture is comparable to that of a backpropagation-trained CNN of the same architecture. (a) Accuracy of a CNN with three convolutional layers as a function of the number of filters per layer, after training for 200 epochs with batch size 50. The filter size is \(7\times 7\); the learning rate was set to the respective optimal value of \(5\times 10^{-5}\) for FF and \(10^{-3}\) for BP. FF-trained networks used labels from set 1 and a label intensity K of 35%. The values reported for BP and FF are computed on the validation data. The green data points show the results for the FF-trained network, with inference performed by goodness comparison. In this scenario, an accuracy of 99.16 ± 0.02% was achieved with 128 filters per layer on the test data, as shown by the corresponding confusion matrix in (b). (c) The loss computed for the discrimination between positive and negative training data for each hidden layer contributing to the training (red and blue lines), together with the combined loss used during training (green line). (d) The discrimination accuracy of the same hidden layers (red and blue lines) and the total accuracy obtained during training (green line).
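The "goodness comparison" inference mentioned in the caption can be sketched as follows. This is a hypothetical minimal illustration, not the authors' code: it uses a tiny fully connected stack in place of the CNN, assumes goodness is the sum of squared activations per layer (a common choice in Forward–Forward work), and embeds each candidate label into the first input entries at the stated label intensity of 0.35, choosing the label that maximizes the total goodness.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer_goodness(activations):
    """Goodness of one layer: sum of squared activations (assumed FF choice)."""
    return np.square(activations).sum()

def forward(x, weights):
    """Tiny fully connected ReLU stack standing in for the CNN."""
    acts = []
    h = x
    for W in weights:
        a = np.maximum(0.0, h @ W)
        acts.append(a)                       # goodness uses raw activations
        h = a / (np.linalg.norm(a) + 1e-8)   # normalize so only the direction
                                             # is passed to the next layer
    return acts

def predict(x, weights, n_classes=10, label_intensity=0.35):
    """Embed each candidate label in the input; pick the highest total goodness."""
    scores = []
    for label in range(n_classes):
        x_lab = x.copy()
        x_lab[:n_classes] = 0.0
        x_lab[label] = label_intensity       # label written into the input
        acts = forward(x_lab, weights)
        scores.append(sum(layer_goodness(a) for a in acts))
    return int(np.argmax(scores))

# Random (untrained) weights, just to exercise the inference path.
weights = [rng.normal(0, 0.1, (784, 64)), rng.normal(0, 0.1, (64, 64))]
x = rng.normal(0, 1, 784)
print(predict(x, weights))  # a class index in [0, 9]
```

With trained weights, the score for the correct label should dominate; summing goodness over the contributing hidden layers mirrors the combined loss (green line) in panels (c) and (d).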