Fig. 4: Bi-directional RNN (Bi-RNN) results on FSDD dataset. | Nature Communications


From: Self-Contrastive Forward-Forward algorithm


a Training procedure of SCFF on a Bi-RNN. In the first stage, unsupervised training is performed on the hidden connections (both input-to-hidden and hidden-to-hidden transformations) using positive and negative examples. Positive examples are created by concatenating two identical MFCC feature vectors of a digit along the feature dimension, while negative examples are generated by concatenating the MFCCs of two different digits (e.g., digit 3 and digit 8), as illustrated in the figure. At each time step, the features are fed sequentially into the Bi-RNN (forward RNN and backward RNN*). The red regions indicate features at different time steps. In the second stage, a linear classifier is trained on the final hidden states of both RNNs, i.e., \({H}_{T}\) and \({H}_{0}^{*}\), as inputs for the classification task. b Comparison of test accuracy for the linear classifier trained on Bi-RNN outputs. The yellow curve represents accuracy with untrained (random) hidden neuron connections, the blue curve shows results from training with SCFF, and the green curve shows backpropagation (BP) results. Error bars represent mean ± standard deviation across three runs with different random seeds.
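The pair construction and readout described in panel a can be sketched as follows. This is a minimal illustration, not the authors' implementation: array shapes, the plain tanh-RNN update, and all function names (`make_pairs`, `birnn_states`) are assumptions for clarity, with MFCC features taken as arrays of shape (T, F) (time steps × coefficients).

```python
import numpy as np

def make_pairs(mfcc_a, mfcc_b):
    """Build one positive and one negative SCFF example.

    mfcc_a, mfcc_b: MFCC features of two *different* digits, shape (T, F).
    Positive: the same digit concatenated with itself along the feature axis.
    Negative: two different digits concatenated along the feature axis.
    Both results have shape (T, 2F).
    """
    positive = np.concatenate([mfcc_a, mfcc_a], axis=1)
    negative = np.concatenate([mfcc_a, mfcc_b], axis=1)
    return positive, negative

def birnn_states(x, Wf_in, Wf_hh, Wb_in, Wb_hh):
    """Run a simple tanh Bi-RNN over x (shape (T, D)) and return the
    classifier input: forward final state H_T concatenated with the
    backward final state H_0^* (hypothetical minimal recurrence).
    """
    T = x.shape[0]
    hf = np.zeros(Wf_hh.shape[0])  # forward hidden state
    hb = np.zeros(Wb_hh.shape[0])  # backward hidden state
    for t in range(T):             # forward pass: t = 0 .. T-1
        hf = np.tanh(x[t] @ Wf_in + hf @ Wf_hh)
    for t in reversed(range(T)):   # backward pass: t = T-1 .. 0
        hb = np.tanh(x[t] @ Wb_in + hb @ Wb_hh)
    return np.concatenate([hf, hb])  # [H_T, H_0^*], shape (2H,)
```

In this sketch the linear classifier of the second stage would simply be trained on the `(2H,)` vector returned by `birnn_states`; only the recurrent weights see the positive/negative pairs during the unsupervised stage.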
